Addressing the Human Problem With Social Media

The recent Mueller indictments have added fuel to the discussion about how social media, especially Facebook, played a role in the 2016 election. As I noted yesterday, Evan Osnos raised a question that, in our attempt to hold social media platforms accountable, hasn’t gotten a lot of attention.

The power of news illiteracy. At the heart of the Russian fraud is an essential, embarrassing insight into American life: large numbers of Americans are ill-equipped to assess the credibility of the things they read. The willingness to believe purported news stories, often riddled with typos or coming from unfamiliar outlets, is a liability of today’s fragmented media and polarized politics.

I think he has a point about news illiteracy. But it goes way beyond that. Here is how Roger McNamee described what is happening in his article for the most recent edition of the Washington Monthly:

Whenever you log into Facebook, there are millions of posts the platform could show you. The key to its business model is the use of algorithms, driven by individual user data, to show you stuff you’re more likely to react to. Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits. They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.

Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy. The result is that the algorithms favor sensational content over substance. Of course, this has always been true for media; hence the old news adage “If it bleeds, it leads.” But for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members. To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.

Here is how Tristan Harris, former Google executive and co-founder of the Center for Humane Technology, put it during an interview with Ezra Klein:

Ezra Klein: I had Jaron Lanier on this podcast a couple months ago, and he said something I’ve been thinking about since then. He said that the key to a lot of social media is [that] negative emotions engage more powerfully than positive emotions. Do you think he’s right about that?

Tristan Harris: Oh, absolutely. Outrage just spreads faster than something that’s not outrage.

When you open up the blue Facebook icon, you’re activating the AI, which tries to figure out the perfect thing it can show you that’ll engage you. It doesn’t have any intelligence, except figuring out what gets the most clicks. The outrage stuff gets the most clicks, so it puts that at the top.
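The dynamic McNamee and Harris describe can be reduced to a simple ranking step: predict how strongly each post will make you react, then sort the feed by that prediction. The sketch below is purely illustrative — the posts, scores, and scoring rule are hypothetical, not Facebook's actual system — but it shows why outrage-bait rises to the top once engagement becomes the sole sorting key.

```python
# Illustrative sketch only -- not Facebook's actual code. The posts and
# engagement scores are hypothetical, standing in for a model's
# prediction of how strongly a user will react to each item.

def rank_feed(posts):
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

candidate_posts = [
    {"headline": "Local park cleanup a success", "predicted_engagement": 0.2},
    {"headline": "You won't BELIEVE what they did now", "predicted_engagement": 0.9},
    {"headline": "City council passes budget", "predicted_engagement": 0.4},
]

feed = rank_feed(candidate_posts)
# The outrage-bait headline lands first, because the model predicts it
# will draw the most clicks -- exactly the feedback loop Harris describes.
```

Note that nothing in the ranking rule cares about accuracy or substance; the algorithm is "value neutral" only in form, while the single value it optimizes — attention — does all the work.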

Frankly, it isn’t just social media that operates this way. I am reminded of something David Frum wrote back in 2010 about right-wing media.

I’ve been on a soapbox for months now about the harm that our overheated talk is doing to us. Yes it mobilizes supporters – but by mobilizing them with hysterical accusations and pseudo-information, overheated talk has made it impossible for representatives to represent and elected leaders to lead. The real leaders are on TV and radio, and they have very different imperatives from people in government. Talk radio thrives on confrontation and recrimination. When Rush Limbaugh said that he wanted President Obama to fail, he was intelligently explaining his own interests. What he omitted to say – but what is equally true – is that he also wants Republicans to fail. If Republicans succeed – if they govern successfully in office and negotiate attractive compromises out of office – Rush’s listeners get less angry. And if they are less angry, they listen to the radio less, and hear fewer ads for Sleepnumber beds.

While this is a much bigger issue for right-wing media, there are left-wing sites that I avoid because, too many times, I’ve clicked on salacious headlines only to find that the facts don’t back up the outrage.

During 2016, there were countless headlines describing the election as one driven by anger. Given what we know now, we need to ask how much of that was organic among voters and how much was fueled by social media in particular and partisan media in general. The candidates the Russians chose to support on social media—Trump and Sanders—were the two that relied primarily on fueling outrage in the population.

Frum nailed what is behind all of this: the hunt for advertising dollars. I once attended a talk by Bill Doherty, professor in the Family Social Science Department at the University of Minnesota. He began by asking the audience, “What is the goal of television?” If you answered anything related to entertainment, you’d be wrong. The real goal is to produce eyeballs for advertisers. That is true of social media and, as all of the people above described, the best way to do that is to appeal to people’s outrage. That is not simply a media problem. It is a human problem that the media exploits.

I believe that this is the central issue we face when it comes to meaningful political discourse in this country. What we have is a situation where people both seek out and are fed information that fuels their anger and keeps them locked in bubbles that reinforce their existing viewpoints. But it basically comes down to a chase for advertising dollars via the promotion of anger.

Right now most of the solutions being put forward to fix social media focus on opening up those platforms. McNamee has a helpful list in his article on how to do that (something that is often lacking in the discussion). But it is hard for me to imagine those ever being truly effective as long as revenue and profits are dependent on providing clicks for the advertisers.

A much more daunting solution would be to develop more informed social media consumers. Much like everyone else, I’m not sure how we go about doing that. But I tend to go back to the fact that Barack Obama won the presidential election in 2008 on a message of “hope and change” during the depths of the Great Recession, when voters might have been the most susceptible to outrage. I suspect that Marshall Ganz identified why.

How do organizers master urgency to break through inertia? The difference in how individuals respond to urgency or anxiety (detected by the brain’s surveillance system) depends on the brain’s dispositional system, the second system in the brain, which runs from enthusiasm to depression, from hope to despair. When anxiety hits and you’re down in despair, then fear hits. You withdraw or strike out, neither of which helps to deal with the problem. But if you’re up in hope or enthusiasm, you’re more likely to ask questions and learn what you need to learn to deal with the unexpected.

Hope is not only audacious, it is substantial. Hope is what allows us to deal with problems creatively. In order to deal with fear, we have to mobilize hope.

While we sift through ideas about how to improve social media platforms, it is equally important for those of us interested in solving this problem to think about how we can generate hope and enthusiasm in response to anger, rather than anxiety and fear. When Facebook programs its algorithms to promote those positive reactions because they generate clicks for the advertisers, we will have tackled the human problem associated with social media.

Nancy LeTourneau

Nancy LeTourneau is a contributing writer for the Washington Monthly.