The Dis-information Choke Point: Dis-tribution (Not Supply or Demand) [Stub]
"Demand for Deceit: How the Way We Think Drives Disinformation" is an excellent report from the National Endowment for Democracy (by Samuel Woolley and Katie Joseff, 1/8/20). It highlights the dual importance of supply-side and demand-side factors in the problem of disinformation (fake news). That crystallizes in my mind an essential gap in this field — smarter control of distribution. The importance of this third element, which mediates between supply and demand, was implicit in my comments on algorithms (in section #2 of the prior post).
[This is a stub for a fuller post yet to come. (It is an adaptation of a brief update to my prior post on Regulating the Platforms, but deserves separate treatment.)]
There is little fundamentally new about the supply or the demand for disinformation. What is fundamentally new is how disinformation is distributed. That is what we most urgently need to fix. If disinformation falls in a forest… but appears in no one’s feed, does it disinform?
In social media, a new form of distribution mediates between supply and demand. The platform filters content, upranking some items and downranking others, and so governs what users see. If disinformation is downranked, we will not see it — even if it is posted and potentially accessible to billions of people. Filtered distribution is what makes social media not just more information, faster, but an entirely new kind of medium. Filtering is a new, automated form of moderation and amplification, with implications for both the design and the regulation of social media.
Controlling the choke point
By changing social media filtering algorithms we can dramatically reduce the distribution of disinformation. It is widely recognized that there is a problem of distribution: current social media promote content that angers and polarizes, because that increases engagement and thus ad revenue. The services could instead filter for quality and value to users, but they have little incentive to do so. What little effort they have ever made in that direction has been lost in the quest for ad revenue.
Social media marketers speak of “amplification.” It is easy to see the supply and demand for disinformation, but marketing professionals know that it is amplification in distribution that makes all the difference. Distribution is the critical choke point for controlling this newly amplified spread of disinformation. (And as Feld points out, the First Amendment does not protect inappropriate uses of loudspeakers.)
While this is a complex area that warrants much study, the arguments the report cites against the importance of filter bubbles (in the box on page 10) are less relevant to social media, where filters are based largely on the user's social graph (who promotes items into their feed, via posts, likes, comments, and shares), not just on active search behavior (what they search for).
Changing the behavior of demand is clearly desirable, but it would be a long and costly effort. It is recognized that we cannot stop the supply. But we can control distribution — changing filtering algorithms could have significant impact rapidly, and would apply across the board, at Internet scale and speed — if the social media platforms could be motivated to design better algorithms.
How can we do that? A quick summary of key points from my prior posts…
We seem to forget what Google’s original PageRank algorithm taught us: content quality can be inferred algorithmically from human user behaviors, without intrinsic understanding of the meaning of the content. Such algorithms can be made far more nuanced. Current upranking is based on likes from everyone in one’s social graph — all treated as equally valid. Instead, we can design algorithms that learn from the user behaviors on page 8: which users share responsibly (reading more than headlines and showing discernment for quality), which are promiscuous (sharing reflexively, with minimal dwell time), and which are malicious (repeatedly sharing content determined to be disinformation). Why should those last two groups have more than minimal influence on what other users see?
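To make this concrete, here is a minimal, hypothetical sketch (in Python) of how sharing behavior might be scored. The signals (dwell time, whether the article was opened, later fact-check flags) and the hand-picked weights are illustrative assumptions on my part, not a description of any platform's actual system:

```python
from dataclasses import dataclass

@dataclass
class ShareEvent:
    user_id: str
    dwell_seconds: float      # time spent on the item before sharing
    opened_article: bool      # did they read past the headline?
    flagged_as_disinfo: bool  # later judged disinformation by fact-checkers

def reputation_score(events: list[ShareEvent]) -> float:
    """Score a user's sharing behavior on a 0..1 scale.

    Responsible sharers (read before sharing, long dwell, few flagged
    shares) score high; promiscuous or malicious sharers score low.
    """
    if not events:
        return 0.5  # no history: neutral prior
    read_rate = sum(e.opened_article for e in events) / len(events)
    careful_rate = sum(e.dwell_seconds >= 30 for e in events) / len(events)
    disinfo_rate = sum(e.flagged_as_disinfo for e in events) / len(events)
    # Illustrative hand-tuned blend; a real system would learn these weights.
    return max(0.0, 0.4 * read_rate + 0.4 * careful_rate - 0.6 * disinfo_rate + 0.2)
```

A production system would learn the weights from labeled data rather than hand-tune them, and would draw on many more behavioral signals than these three.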
The spread of disinformation could be dramatically reduced by upranking “votes” on what to share from users with good reputations, and downranking votes from those with poor reputations. I explain further in A Cognitive Immune System for Social Media — Developing Systemic Resistance to Fake News and In the War on Fake News, All of Us are Soldiers, Already! More specifics on designing such algorithms are in The Augmented Wisdom of Crowds: Rate the Raters and Weight the Ratings. Social media now reflect the wisdom of the mob — instead we need to seek the wisdom of the smart crowd. That is what society has sought to do for centuries.
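As an illustrative sketch of the voting idea (mine, not the full method from those posts): weight each share "vote" by the sharer's reputation, so volume alone cannot win. The reputation values are assumed inputs, however they were derived:

```python
def weighted_item_score(votes: dict[str, float],
                        reputation: dict[str, float]) -> float:
    """Score an item by summing each sharer's vote, weighted by that
    sharer's reputation (0..1). Unknown users get a neutral 0.5."""
    return sum(strength * reputation.get(user, 0.5)
               for user, strength in votes.items())

# A hundred shares from low-reputation accounts...
bot_votes = {f"bot{i}": 1.0 for i in range(100)}
bot_reps = {f"bot{i}": 0.05 for i in range(100)}

# ...are outweighed by ten shares from high-reputation users.
careful_votes = {f"user{i}": 1.0 for i in range(10)}
careful_reps = {f"user{i}": 0.9 for i in range(10)}

assert weighted_item_score(careful_votes, careful_reps) > \
       weighted_item_score(bot_votes, bot_reps)
```

The "rate the raters" step would then feed outcomes (such as later disinformation flags) back into the reputations themselves, iteratively, much as PageRank propagates link authority.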
Beyond that, better algorithms could counter the social media filter bubble effects by applying a kind of judo to the active drivers noted on page 8. In 2012 Cass Sunstein suggested “surprising validators” as one way this might be done, and I built on that idea to explain how it could be applied in social media algorithms: Filtering for Serendipity — Extremism, ‘Filter Bubbles’ and ‘Surprising Validators’.
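As a toy sketch of the “surprising validators” idea (my illustration, not Sunstein's formulation): uprank a counter-attitudinal item when it is endorsed by sources the user already trusts and agrees with. The stance labels, trust set, and 0.2-per-validator increment are all hypothetical:

```python
def surprising_validator_boost(item_stance: str,
                               user_stance: str,
                               endorsers: list[str],
                               stance_of: dict[str, str],
                               trusted_by_user: set[str]) -> float:
    """Return a ranking boost (0..1) for an item that challenges the
    user's view, when endorsed by validators the user trusts who share
    the user's own stance; such endorsements should carry surprising weight."""
    if item_stance == user_stance:
        return 0.0  # congenial content needs no serendipity boost
    surprising = [e for e in endorsers
                  if e in trusted_by_user and stance_of.get(e) == user_stance]
    return min(1.0, 0.2 * len(surprising))  # 0.2 per validator is arbitrary
```

The design point is the conjunction: disagreement with the item plus trust in the endorser, which is what makes the validation “surprising” and potentially persuasive.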
If platforms and regulators focused more on what such distribution algorithms could do, they might take action to make that happen (as addressed in Regulating our Platforms — A Deeper Vision).
Yes, “the way we think drives disinformation,” and social media distribution algorithms drive how we think — we can drive them for good, not bad!
[Re-posted from my blog, Smartly Intertwingled — see updates on that version.]
See the Selected Items tab of my blog for more on this theme.