
It has been just over a century since Justice Oliver Wendell Holmes essentially invented the modern First Amendment by declaring that the “theory of our Constitution” requires government to preserve a “free trade in ideas.” This, Holmes argued, was because “the best test of truth is the power of the thought to get itself accepted in the competition of the market.” 

Social Media and the Public Interest: Media Regulation in the Disinformation Age
by Philip M. Napoli
Columbia University Press, 296 pp.

Holmes was writing in dissent, but it didn’t take long for the idea to catch on. In the 1927 case Whitney v. California, Justice Louis Brandeis, joined by Holmes, wrote a landmark concurrence arguing that when American political discourse is threatened by the dissemination and consumption of false speech, “the remedy to be applied is more speech, not enforced silence.” 

In the decades that followed, the notion that competition among ideas would correct error and spread truth became a central tenet of First Amendment theory. Conservatives and liberals often differed on what role government should play to ensure that the marketplace of ideas remained open and competitive. But, by and large, a consensus developed that in an open and competitive speech environment, truth will eventually overcome falsehood and democracy will thereby be served. 

This helps explain why, when the internet first came on the scene in the 1990s, few observers saw it as a threat to democracy, much less to the very concept of truth. Indeed, conventional wisdom predicted just the opposite. The new technology would break down barriers to entry, empowering more people to engage in a global exchange of ideas, overthrowing old superstitions and barbarisms. Remember when people actually believed that Facebook and Twitter would facilitate the spread of democracy and human rights and the toppling of autocracies around the world? 

Those hopes now seem painfully naive. From Russia to Myanmar to Pennsylvania Avenue, social media increasingly looks like a boon, not a threat, to illiberal regimes. What went wrong? And how can we fix it? Those are the questions that Philip M. Napoli, a professor of public policy at Duke University, sets out to answer in his new book, Social Media and the Public Interest. The rise of social media platform monopolies like Facebook and Google, Napoli argues, has created what he calls an “algorithmic marketplace of ideas” that is most notable for the waste products it produces. These include the loss of competition in the truth business caused by algorithmically engineered “filter bubbles” as well as the resulting spread of fake news. This market failure is so deep, Napoli argues, that it cannot be solved by conventional antitrust or other competition policies. 

Instead, he argues, Americans must embrace rigorous regulation of social media platforms so that they are made to serve public purposes. If that means rethinking traditional understandings of the right to free speech, so be it. “Just as it has been asked whether the assumptions underlying the Second Amendment right to bear arms (written in the era of muskets and flintlocks) are transferable to today’s technological environment of high-powered, automatic assault weapons,” Napoli writes, “it may be time to ask whether this fundamental assumption of First Amendment theory, crafted in an era when news circulated primarily via interpersonal contact and print media, [is] transferable to today’s radically different media environment.”

Napoli’s analysis rests heavily on the role that corporate-controlled algorithms have come to play in determining how most Americans receive their news. From the 1990s to the mid-2000s, the web was still largely a “pull” medium, in which users actively searched for content and pulled out what interested them rather than passively receiving content through a “feed.” If you wanted to know what a public figure, politician, or writer had to say about a topic, you used your own initiative and a search engine to find the appropriate web page. This early internet, sometimes called Web 1.0, radically democratized access to information and, through blogging, expanded opportunities for ordinary people to share their thoughts and ideas. 

It also, however, led to a fragmented audience that was of little use to advertisers. And so Web 2.0 came to be. Napoli tells the story of how social media platforms, especially Facebook and Google-owned YouTube, finally created products that delivered the massive, passive audience advertisers craved. They did so, overwhelmingly, by running ever-greater quantities of individual user data through algorithms to push out just the kind of content that would reinforce users’ preexisting tastes, attitudes, and ideas—the stuff that is proven to keep people glued to the platforms. Napoli quotes a former Facebook chief technology officer as saying that algorithmic filtering “was always the thing people said they didn’t want but demonstrated they did via every conceivable metric.” By 2017, more than 70 percent of the time people spent watching videos on YouTube was being driven by the “curation” of Google’s algorithms. 
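The dynamic Napoli describes can be made concrete with a toy sketch. The Python fragment below is my own illustration, not any platform’s actual code; the story titles, topic labels, and weights are invented. It simply ranks candidate stories by their similarity to what a user has already engaged with, which is the basic logic that keeps a feed serving up more of the same.

```python
# Toy illustration of engagement-optimized feed ranking (hypothetical, not any
# platform's real system). A user's "profile" is the average of the topic
# weights of stories they have already engaged with; candidates are ranked by
# how closely they match that profile, so familiar content keeps winning.

from dataclasses import dataclass


@dataclass
class Story:
    title: str
    topics: dict[str, float]  # topic label -> weight, e.g. {"politics": 0.9}


def profile_from_history(history: list[Story]) -> dict[str, float]:
    """Average the topic weights of the stories the user engaged with."""
    profile: dict[str, float] = {}
    for story in history:
        for topic, weight in story.topics.items():
            profile[topic] = profile.get(topic, 0.0) + weight
    return {t: w / len(history) for t, w in profile.items()} if history else {}


def predicted_engagement(story: Story, profile: dict[str, float]) -> float:
    """Dot product of a story's topics with the user profile: familiar wins."""
    return sum(w * profile.get(topic, 0.0) for topic, w in story.topics.items())


def rank_feed(candidates: list[Story], history: list[Story]) -> list[Story]:
    """Order candidate stories by predicted engagement, highest first."""
    profile = profile_from_history(history)
    return sorted(candidates, key=lambda s: predicted_engagement(s, profile), reverse=True)


if __name__ == "__main__":
    history = [
        Story("Candidate X rally recap", {"partisan_politics": 0.9}),
        Story("Why the other side is wrong", {"partisan_politics": 0.8, "opinion": 0.5}),
    ]
    candidates = [
        Story("Outrage: the latest scandal", {"partisan_politics": 1.0, "opinion": 0.7}),
        Story("Fact-check of viral claims", {"fact_check": 0.9, "partisan_politics": 0.2}),
        Story("Local school board budget report", {"local_news": 1.0}),
    ]
    for story in rank_feed(candidates, history):
        print(story.title)
    # The partisan, opinion-heavy item ranks first; the local reporting ranks
    # last. Optimizing for predicted engagement narrows the feed rather than
    # broadening it -- the "filter bubble" in miniature.
```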


Thanks to the increasingly desperate dependence of news organizations on social media, the biases of these algorithms are passed upstream to the practice of journalism itself. In the old media environment, Napoli explains, information tended to diffuse through a two-step process. Editors used their judgment to select stories that were in turn read by direct news consumers. People who actively followed the news then played the role of “opinion leaders” by passing on information through word of mouth. Today, however, news organizations increasingly feel the pressure to produce content that will be retweeted, liked, or otherwise shared on social media platforms. And the platforms themselves further filter their own news feeds by using algorithms designed to “personalize” the news. The result of both processes is a news environment that confirms readers’ existing prejudices and received ideas as opposed to giving them the truthful information they need to make reasoned judgments about which parties and politicians represent their interests. 

Adding to the dysfunction of today’s media environment, Napoli argues, is its weird combination of monopoly and ruinous competition. He notes that Facebook and Google now thoroughly dominate the market for digital advertising, both nationally and globally. This denies news organizations the revenues they need to support quality journalism, most notably reporting. Making matters worse, news organizations continue to compete intensely with one another for both eyeballs and the scant advertising dollars that remain, spurring a race to the bottom that rewards the parasitic repurposing of content created elsewhere and the replacement of resource-intensive reporting with facile commentary. This ruinous competition, Napoli suggests, “takes the form of a decline in the overall quality of the products being produced to levels that essentially make them incapable of serving their intended purpose—cultivating an informed citizenry.” 

This analysis causes Napoli to give up on the Holmesian idea of an open marketplace of ideas in which the answer to false or dangerous speech is more speech. “[W]e may not be able to rely on counterspeech within the context of social media in the same way that we could with older media,” he writes. Instead, Napoli proposes that we rethink the First Amendment using a more “collectivist approach.” Rather than worry about whether everyone gets a chance to speak, we should worry about whether everything that needs saying gets said: “Within the realm of social media, the First Amendment facilitates a speech environment that is now capable of doing perhaps unprecedented harms to the democratic process, while restricting regulatory interventions that could potentially curb those harms.”

Operationally, what Napoli seems to have in mind is not only holding social media companies accountable under the same libel laws that apply to conventional media outlets, but also regulating them according to how well they serve “the public interest.” As Napoli turns to solutions, he becomes exceedingly vague, but he seems to suggest prohibiting social media platforms from disseminating information that undermines the functioning of democracy. The only specific example he gives, unfortunately, is “fake news that leads to misinformed voting behaviors.” And even there, his prescription lacks detail. Who decides what is fake news? Napoli does not say. He worries about turning that function over to the Trump administration, yet still thinks that the state of technology leaves us no choice but to embrace much deeper government regulation of the press. 

This is a dark vision. Before we embrace the notion that saving democracy requires abridging free speech, perhaps we should consider other tools, such as applying traditional antitrust and other competition policies to ensure that social media is no longer dominated by corporate Goliaths. An even simpler but potentially transformative move would be to outlaw the use of personal data for targeted advertising. That would, at a stroke, undermine the business model that underlies so much of the predatory, addiction-encouraging behavior by Facebook and Google, while also helping the producers of actual journalism recapture some of the advertising revenue they need to survive. If an open marketplace of ideas cannot be preserved, then liberal democracy may already be dead.


Phillip Longman is senior editor at the Washington Monthly and policy director at the Open Markets Institute.