
On Saturday, I wrote a piece on the various ways that Facebook is destroying both journalism and democracy. Today there is a story in the Washington Post about a new study conducted by Facebook itself on vaccine disinformation on its platform. The main finding is that a small number of Facebook users are responsible for much of the vaccine disinformation circulating there.

Facebook also wants you to know that they are not only putting resources into studying the problem but also taking active steps to confront it by limiting content and banning users who disseminate lies and conspiracy theories about vaccines. This is good PR for the company, of course. It’s also quite likely that, as in any large organization, there are people trying to do the right thing and change the world for the better as part of their jobs.

But both the framing of the problem itself and Facebook’s response to it only underscore the problems with Facebook’s underlying business model.

First, consider the fact that a relatively small number of people are able to generate mistrust in a much larger universe:

Some of the early findings are notable: Just 10 out of the 638 population segments contained 50 percent of all vaccine hesitancy content on the platform. And in the population segment with the most vaccine hesitancy, just 111 users contributed half of all vaccine hesitant content.

Facebook could use the findings to inform discussions of its policies for addressing problematic content or to direct more authoritative information to the specific groups, but the company was still developing its solution, spokeswoman Dani Lever said.

Facebook can’t do anything about the fact that there are large pockets of angry, low-trust conspiracy theorists online. But Facebook could do something about the fact that those pockets have disproportionate impact on a larger universe, because Facebook tailors its engagement algorithms to promote the most controversial voices. It is no surprise at all that a small number of obnoxious people on Facebook are impacting the opinions of much larger groups on Facebook. That’s what Facebook is designed to do! The platform is deliberately designed to get people to see provocative opinions and either argue with the people posting them or delve deeper down the rabbit hole, forming like-minded closed communities around their (often newfound) belief systems.

It’s also important to point out that while there are left-leaning conspiracists and anti-vax groups, the vast majority of the low-trust conspiracy content on the platform is right-wing. And even as the company was taking flak for promoting socially destructive content, it was also trying to fend off Republican accusations of “bias” for sidelining that content. Facebook could have told Republican elected officials that it was not Facebook’s fault that their party increasingly promoted falsehoods and conspiracies while relying on a disinformation cult for votes. But Mark Zuckerberg chose instead to pander to Trump and the GOP. You cannot try to placate Fox News and Newsmax while also promoting socially responsible information. The two goals are mutually exclusive.

Facebook’s response to the problem has been to belatedly try to quash anti-vax, QAnon and other socially destructive content. But this is like closing the barn door after the horse has already escaped. It also fails to get to the heart of the problem. Like a mutating virus, hate speech and conspiracy theories are constantly evolving. Trying to stifle them only after they appear is a Sisyphean task:

The problem is that for all Zuckerberg’s promises, this strategy is tenuous at best.

Misinformation and hate speech constantly evolve. New falsehoods spring up; new people and groups become targets. To catch things before they go viral, content-moderation models must be able to identify new unwanted content with high accuracy. But machine-learning models do not work that way. An algorithm that has learned to recognize Holocaust denial can’t immediately spot, say, Rohingya genocide denial. It must be trained on thousands, often even millions, of examples of a new type of content before learning to filter it out. Even then, users can quickly learn to outwit the model by doing things like changing the wording of a post or replacing incendiary phrases with euphemisms, making their message illegible to the AI while still obvious to a human. This is why new conspiracy theories can rapidly spiral out of control, and partly why, even after such content is banned, forms of it can persist on the platform.
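To make the brittleness described above concrete, here is a minimal, hypothetical sketch of the kind of bag-of-words classifier the quoted passage is describing (using scikit-learn with made-up example posts, not anything from Facebook’s actual systems). A model trained to flag certain phrasings simply has no vocabulary for a reworded version of the same claim:

```python
# Toy bag-of-words classifier, illustrating why models trained on specific
# phrasing miss reworded or euphemistic versions of the same content.
# Hypothetical example data; not Facebook's actual moderation pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training posts labeled 1 (violating) or 0 (benign).
posts = [
    "the vaccine contains a microchip",           # 1
    "vaccines cause autism, do not get one",      # 1
    "get vaccinated to protect your community",   # 0
    "the clinic has appointments open tomorrow",  # 0
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The same conspiracy claim, reworded with terms the model never saw.
# A human reads it as identical; the model has almost no overlapping
# vocabulary, so it scores the post as if it were benign.
reworded = ["the jab has a tiny tracker inside it"]
print(model.predict_proba(reworded))  # most probability lands on the benign class
```

Retraining fixes this only for wording the model has already seen; each new euphemism or new conspiracy theory restarts the cycle, which is the Sisyphean dynamic the excerpt describes.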

Facebook’s approach to hate speech and disinformation is similar to capitalism’s approach to climate change: let the problem show up, then belatedly apply superficial measures while doing nothing about the fundamental causes. Facebook can no more address disinformation under its current business model than unregulated capitalism can fix climate change. The model itself is causing the problem, and any measures taken to address it after the fact are like Band-Aids on a gaping wound.

The only way for Facebook to solve this problem is to limit the degree to which its algorithms prioritize inflammatory, high-engagement content from angry users. They don’t want to do that. They also don’t want to anger the conservative parts of their user base that are most reliant on disinformation. Doing either would cut into their profits and expectations of growth.

As long as they refuse to address the fundamental flaw in their business model, no number of detailed studies or content bans will ever be adequate.

David Atkins

Follow David on Twitter @DavidOAtkins. David Atkins is a writer, activist and research professional living in Santa Barbara. He is a contributor to the Washington Monthly's Political Animal and president of The Pollux Group, a qualitative research firm.