Last Thursday, Mark Zuckerberg delivered a thirty-five-minute speech at Georgetown University laying out his views on the extent to which Facebook should be an arbiter of speech. In response to growing criticism of Facebook’s failure to sufficiently regulate extremist propaganda on its platform, Zuckerberg defended the social network’s policy of not fact-checking political ads, arguing that he does not believe Facebook should prevent voters from hearing directly from politicians. He attempted to align himself with civil rights leaders like Martin Luther King, Jr., as a champion of free expression and democracy. The irony, however, is that by refusing to prevent misinformation from distorting the political process, Facebook does more to threaten our democracy than to defend and preserve it.
The primary flaw in Zuckerberg’s thinking is his apparent belief that false information from political campaigns plays a valuable role in political discourse—and that Facebook users, rather than Facebook employees, should determine the veracity of political ads. It’s not that Zuckerberg thinks Facebook should not regulate speech, because he clearly does. Facebook takes down terrorist propaganda, pornography, content that bullies young people, speech that incites violence, and attempted voter suppression. It’s not that Zuckerberg believes he shouldn’t prevent the spread of misinformation. Facebook uses third-party fact-checkers to help curb false news. Rather, Zuckerberg suggests that lies that come directly from a politician’s campaign are not as dangerous as other problematic content.
In one recent example, a Trump campaign ad made the unsubstantiated claim that Joe Biden “offered Ukraine $1 billion to fire the prosecutor investigating a company affiliated with his son.” CNN refused to run the ad, because it said it “makes assertions that have been proven demonstrably false by various news outlets.” Facebook, however, refused to take the ad down. In a letter, the company said that claims made “directly by a politician on their Page, in an ad or on their website” are exempt from Facebook’s third-party fact-checking program, “even if the substance of that claim has been debunked elsewhere.” This line of thinking makes at least one thing clear: Zuckerberg fails to understand how dangerous misinformation is, regardless of its source.
Zuckerberg has employed two main defenses to justify permitting dishonest political ads. The first is that “people should be able to see for themselves what politicians are saying.” Zuckerberg appears to be arguing that it is valuable for voters to know what candidates are saying even if they are lying. To an extent, he is right. It is important for voters to know whether a candidate is telling the truth. The problem is that Facebook’s policy does not help people determine that. Facebook allows lies to be freely disseminated and mistaken for truth. As a result, its policy causes more confusion than clarity.
In fact, when Zuckerberg’s argument is applied in other circumstances, it seems awfully weak. Consider an analogy he should appreciate. Let’s say Zuckerberg were to curate Facebook’s company emails the way Facebook curates content on its platform. Now, imagine there are two candidates vying for an open seat on Facebook’s Board of Directors—one good candidate and one nefarious candidate. Before the meeting, the nefarious one tries to email two current board members who prefer the good candidate, telling them the meeting date has been changed so they don’t show up. Since the email constitutes an attempt at voter suppression, Zuckerberg would intervene to make sure the misleading email is never sent, which is the right thing to do.
But now let’s say that same nefarious candidate sends an email to the entire board making unsubstantiated accusations about the good candidate, including claims that he was accused of sexual misconduct and misusing company funds at his previous job. If even one or two board members believe the accusations, it could unfairly shift the outcome of the election.
Under Facebook’s current policy, however, Zuckerberg would permit that misleading email to be sent, so long as the nefarious candidate is treated as a politician. And while there is some value when it is ultimately discovered that the nefarious candidate lied, the harm done by misleading the board about the good candidate greatly outweighs the value of that knowledge. That Zuckerberg would step in to prevent the attempt at voter suppression, but not the spread of false information in this context, is unjustifiable. A better alternative would be for Zuckerberg to prevent the misleading email from being sent, and to punish the offender.
Here’s the second defense Zuckerberg raised: “As a principle, in a democracy, I believe people should decide what is credible, not tech companies.” What’s more, most people do not “want to live in a world where you can only post things that tech companies judge to be 100% true.” Both statements have a certain appeal, but they sidestep the core issues.
Zuckerberg presents a false choice by framing the problem as one in which either ordinary citizens or tech companies get to decide what is credible. In reality, it is not a choice between people and tech companies. It’s between Facebook users and Facebook employees. And it is far better for paid, trained employees to tackle the challenge of determining the veracity of political ads than unpaid, untrained Facebook users. In this role, Facebook is kind of like the Food and Drug Administration, which conducts safety inspections to prevent the spread of foodborne illnesses. Sure, consumers could try to assess the safety of every risky food in the grocery store on their own, but isn’t it better if trained professionals make sure unsafe food never reaches the shelves?
Of course, there is a legitimate concern that Facebook could go too far. It could inadvertently ban truthful political ads. As Zuckerberg himself said, it would be problematic if the bar for having an ad accepted were set too high. But this is not some abstract choice. We’ve already lived with the consequences of Facebook’s hands-off policy.
Russia interfered in our 2016 elections, and the Trump campaign is already exploiting Facebook’s policy to spread more misinformation in this election cycle. Banning political ads with false or unsubstantiated claims is worth the slight risk that an undeserving ad might get taken down, particularly since deserving ads can be reinstated after an appeal to Facebook’s Oversight Board. Zuckerberg should worry more about the very real harm of misinformation than the theoretical harm of censoring accurate content, especially when we’ve already experienced the deleterious effects of the former but have yet to experience the latter.
Throughout his Thursday speech, Zuckerberg was clear about Facebook’s two-pronged mission—“give people voice, and bring people together.” In many ways, the company has achieved the second goal. Facebook has been instrumental in helping nonprofits and volunteer organizations fundraise and conduct community outreach. It also helps families stay in touch, businesses attract customers, and users find events they might otherwise miss. That said, there is no legitimate role for false information in any of the aforementioned successes.
But the first goal, giving people a voice, is not an accurate description of what Facebook does. Facebook doesn’t give people a voice. People are born with voices. Facebook amplifies them by giving people a platform. And Facebook must draw clear, distinct lines to ensure that the voices and messages amplified on that platform do more good than harm.
Zuckerberg is right to value free expression and to be cautious about regulating speech. But in refusing to fact-check ads from political campaigns, his company is making a mistake. Misinformation can be as dangerous to democracy as voter suppression. By giving false political advertising protection it does not deserve, Facebook is once again putting the integrity of our elections at risk.