A Regulatory Framework for the Internet (with Thanks to Ben Thompson)
Summarizing Ben Thompson of Stratechery, plus my own targeted proposals
“A Regulatory Framework for the Internet,” Ben Thompson’s masterly framework, should be required reading for all regulators, as well as anyone concerned about tech and society. (Stratechery is one of the best tech newsletters, well worth the subscription price, but this article is freely accessible.)
I hope you will read Ben’s full article, but here are some points that I find especially important, followed by the suggestions I posted on his forum (which is not publicly accessible).
Part I — Highlights from Ben’s Framework (emphasis added)
Opening with the UK government White Paper calling for increased regulation of tech companies, Ben quotes MIT Tech Review about the alarm it raised among privacy campaigners, who “fear that the way it is implemented could easily lead to censorship for users of social networks rather than curbing the excesses of the networks themselves.”
Ben identifies three clear questions that make regulation problematic:
First, what content should be regulated, if any, and by whom?
Second, what is a viable way to monitor the content generated on these platforms?
Third, how can privacy, competition, and free expression be preserved?
Exploring the viral spread of the Christchurch hate crime video, he gets to a key issue:
What is critical to note, though, is that it is not a direct leap from “pre-Internet” to the Internet as we experience it today. The terrorist in Christchurch didn’t set up a server to livestream video from his phone; rather, he used Facebook’s built-in functionality. And, when it came to the video’s spread, the culprit was not email or message boards, but social media generally. To put it another way, to have spread that video on the Internet would be possible but difficult; to spread it on social media was trivial.
The core issue is business models: to set up a live video streaming server is somewhat challenging, particularly if you are not technically inclined, and it costs money. More expensive still are the bandwidth costs of actually reaching a significant number of people. Large social media sites like Facebook or YouTube, though, are happy to bear those costs in service of a larger goal: building their advertising businesses.
Expanding on business models, he describes the ad-based platforms as “Super Aggregators:”
The key differentiator of Super Aggregators is that they have three-sided markets: users, content providers (which may include users!), and advertisers. Both content providers and advertisers want the user’s attention, and the latter are willing to pay for it. This leads to a beautiful business model from the perspective of a Super Aggregator:
Content providers provide content for free, facilitated by the Super Aggregator
Users view that content, and provide their own content, facilitated by the Super Aggregator
Advertisers can reach the exact users they want, paying the Super Aggregator
…Moreover, this arrangement allows Super Aggregators to be relatively unconcerned with what exactly flows across their network: advertisers simply want eyeballs, and the revenue from serving them pays for the infrastructure to not only accommodate users but also give content suppliers the tools to provide whatever sort of content those users may want.
…while they would surely like to avoid PR black-eyes, what they like even more is the limitless supply of attention and content that comes from making it easier for anyone anywhere to upload and view content of any type.
…Note how much different this is than a traditional customer-supplier relationship, even one mediated by a market-maker… When users pay they have power; when users and those who pay are distinct, as is the case with these advertising-supported Super Aggregators, the power of persuasion — that is, the power of the market — is absent.
He then distinguishes the three types of “free” relevant to the Internet, and how they differ:
“Free as in speech” means the freedom or right to do something
“Free as in beer” means that you get something for free without any additional responsibility
“Free as in puppy” means that you get something for free, but the long-term costs are substantial
…The question that should be asked, though, is if preserving “free as in speech” should also mean preserving “free as in beer.”
Platforms that are paid for by their users are “regulated” by the operation of market forces, but those that are ad-supported are not, and so need external regulation.
Ben concludes that:
…platform providers that primarily monetize through advertising should be in their own category: as I noted above, because these platform providers separate monetization from content supply and consumption, there is no price or payment mechanism to incentivize them to be concerned with problematic content; in fact, the incentives of an advertising business drive them to focus on engagement, i.e. giving users what they want, no matter how noxious.
This distinct categorization is critical to developing regulation that actually addresses problems without adverse side effects.
…from a theoretical perspective, the appropriate place for regulation is where there is market failure; constraining the application to that failure is what is so difficult.
That leads to Ben’s figure that brings these ideas together, and delineates critical distinctions:
I agree completely, and build on that with my two proposals for highly targeted regulation…
Part II — My comment on the Stratechery Forum (including some portions that were abridged to meet character limits):
Elegant model, beautifully explained! Should be required reading for all regulators.
FIRST: I suggest taking this model further, and mandating that the “free beer” ad-based model be ratcheted down once a service reaches some critical level of scale. That would address the root problem, as well as your concerns about competition.
Why don’t we regulate to fix the root cause? The root cause of Facebook’s abuse of trust is its business model, and until we change that, its motivations will always be opposed to consumer and public trust.
Here is a simple way to force change, without over-engineering the details of the remedy. Requiring a growing percentage of revenue from users is the simplest way to drive a fundamental shift toward better corporate behavior. Others have suggested paying for data, and I suggest this is most readily done in the form of credits against a user service fee. Mandating that a target level of revenue (above a certain level) come from users could drive Facebook to offer such data credits, as a way to meet their user revenue target (even if most users pay nothing beyond that credit). We will not motivate trust until the user becomes the customer, and not the product.
There is a regulatory method that has already proven its success with a similarly challenging problem: forcing automakers to increase the fuel efficiency of the cars they make. The US government has for years mandated staged, multi-year increases in Corporate Average Fuel Economy (CAFE). This approach does not dictate how to fix things; it sets a limit on outcomes that have been shown to cause harm, and automakers determine how best to comply. The analogue here: require that X% of revenue come from users rather than advertisers, and let Facebook and YouTube determine how best to achieve that. Government can monitor progress, with a timetable for ratcheting up the percentage. (This should apply only above some level of revenue, to facilitate competition.)
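The ratchet mechanism described above can be sketched in a few lines. This is a minimal illustration, not a proposal for specific numbers: the revenue floor, the year-by-year schedule, and the example figures are all hypothetical assumptions, and the function names (`required_user_share`, `is_compliant`) are mine, not the article's.

```python
# A CAFE-style ratchet sketch. All thresholds, schedules, and revenue
# figures below are hypothetical illustrations, not numbers from the article.

REVENUE_FLOOR = 1_000_000_000  # assumed: only platforms above $1B total revenue are covered

# Hypothetical multi-year schedule: minimum share of revenue that must come
# from users (including data/attention credits) rather than advertisers.
SCHEDULE = {2025: 0.05, 2027: 0.15, 2030: 0.30}

def required_user_share(year: int) -> float:
    """Return the minimum user-revenue share in force for a given year
    (the most recent scheduled target at or before that year)."""
    applicable = [target for y, target in sorted(SCHEDULE.items()) if y <= year]
    return applicable[-1] if applicable else 0.0

def is_compliant(total_revenue: float, user_revenue: float, year: int) -> bool:
    """Platforms below the revenue floor are exempt (to facilitate competition);
    otherwise the user-revenue share must meet the year's target."""
    if total_revenue < REVENUE_FLOOR:
        return True
    return user_revenue / total_revenue >= required_user_share(year)
```

The point of the design, as with CAFE, is that the regulator specifies only the measurable outcome (the share and the timetable); how the platform gets there, whether via subscriptions, data credits, or paid tiers, is left to the platform.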
With that motivation, Facebook and YouTube can be driven to shift from advertising revenue to customer revenue. That may seem difficult, but only for lack of trying. Credits for attention and data are just a start. If we move in that direction, we can be less dependent on other, more problematic, kinds of regulation.
This regulatory strategy is outlined in To Regulate Facebook and Google, Turn Users Into Customers (in Techonomy). More on why that is important in Reverse the Biz Model! — Undo the Faustian Bargain for Ads and Data. (And some suggestions on more effective ways to obtain user revenue: Information Wants to be Free; Consumers May Want to Pay, also in Techonomy.)
SECOND: Your points about limiting user expression, and that the real issue is harmful spreading on social media, are also vitally important.
I say the real issue is not
- rules for what can and cannot be said (speech is a protected right), but rather
- rules for which statements are seen by whom: distribution (how feeds are filtered and presented) is not a protected right
The value of a social media service should be to disseminate the good, not the bad. (That is why we talk about “filter bubbles” — failures of value-based filtering.)
I suggest Facebook and YouTube should have little role in deciding what can be said (other than to enforce government standards of free speech and clearly prohibited speech to whatever extent practical). What matters is who that speech is distributed to, and the network has full control of that. Strong downranking is a sensible and practical alternative to removal — far more effective and nuanced, and far less problematic.
I have written about new ways to use PageRank-like algorithms to determine what to downrank or uprank — “rate the raters and weight the ratings.”
- Facebook can have a fairly free hand in downranking objectionable speech
- They can apply community standards to what they promote — to any number of communities, each with varying standards.
- They could also enable open filtering, so users/communities can choose someone else’s algorithm (or set their preferences in any algorithm).
- With smart filtering, the spread of harmful speech can be throttled before it does much harm.
- The “augmented wisdom of the crowd” can do that very effectively, on Internet scale, in real time.
- No pre-emptive, exclusionary, censorship technique is as effective at scale — nor as protective of free speech rights or community standards.
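The core of “rate the raters and weight the ratings” can be sketched as an iterative, PageRank-like reweighting: raters whose ratings track the weighted consensus earn more influence, and the consensus is then recomputed with those weights. This is my illustrative reconstruction of the idea, under assumed conventions (scores in [0, 1], a fixed iteration count, agreement measured by mean absolute error); the function and variable names are not from the linked posts.

```python
# Sketch of "rate the raters and weight the ratings": an iterative
# reweighting in which raters who agree with the weighted consensus
# gain influence. Names and convergence scheme are illustrative assumptions.

def weighted_consensus(ratings: dict[str, dict[str, float]], iterations: int = 20):
    """ratings maps rater -> {item: score in [0, 1]}.
    Returns (item_scores, rater_weights)."""
    raters = list(ratings)
    weights = {r: 1.0 for r in raters}  # start with equal trust in every rater
    items = {item for scores in ratings.values() for item in scores}
    item_scores: dict[str, float] = {}
    for _ in range(iterations):
        # 1. Score each item as the weight-averaged rating across its raters.
        for item in items:
            num = sum(weights[r] * ratings[r][item] for r in raters if item in ratings[r])
            den = sum(weights[r] for r in raters if item in ratings[r])
            item_scores[item] = num / den
        # 2. Re-rate each rater: weight falls with mean disagreement from consensus.
        for r in raters:
            errors = [abs(ratings[r][i] - item_scores[i]) for i in ratings[r]]
            weights[r] = max(1e-6, 1.0 - sum(errors) / len(errors))
    return item_scores, weights
```

In use, a rater who consistently scores items far from the emerging consensus ends up with low weight, so their ratings count for little in downranking decisions; honest raters reinforce one another. Real deployments would need defenses against coordinated manipulation, which this sketch omits.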
That approach is addressed at some length in these posts (where “fake news” is meant to include anything objectionable to some community):
- The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings
- A Cognitive Immune System for Social Media — Developing Systemic Resistance to Fake News
…and some further discussion on that:
- Architecting Our Platforms to Better Serve Us — Augmenting and Modularizing the Algorithm
- The Tao of Fake News
- In the War on Fake News, All of Us are Soldiers, Already!
More of my thinking on these issues is summarized in this Open Letter to Influencers Concerned About Facebook and Other Platforms