Margaret Hu, William & Mary Law School

Internet platforms have long enjoyed a legal status that shields them from liability for content posted by their users. Tech giants such as Google, Meta, and TikTok’s parent company, ByteDance, have operated under this umbrella thanks to the Communications Decency Act of 1996, specifically its Section 230, which establishes that companies cannot be treated as the publisher of content that others post on their sites. This distinction helped create the sprawling modern internet. But in the last decade, YouTube, Twitter, and other platforms have been accused of helping terrorists and insurrectionists radicalize converts and plan attacks. In this new era, should internet giants be treated like phone companies, with no liability for plans hatched on their networks, or should they face greater accountability?

Next month, the Supreme Court will hear two cases that could upend this legal immunity. In Gonzalez v. Google LLC, the family of Nohemi Gonzalez, killed in a 2015 ISIS attack, will argue that Google is liable not only for allowing the terrorist organization to post videos on YouTube but also for its algorithm promoting those videos. In Twitter, Inc. v. Taamneh, the family of Nawras Alassaf, killed in a 2017 ISIS-affiliated attack in Istanbul, argues Twitter is liable for the growth of ISIS because the organization used Twitter to recruit members and proselytize.

Margaret Hu, a professor at William & Mary Law School and a faculty fellow with the Institute for Computational and Data Sciences at Penn State University, is an expert in national security in the age of social media and cyber surveillance. I spoke with Hu about the upcoming cases and content regulation.

This conversation has been edited and shortened for clarity.

GN: Why do you think the Court decided to hear these two cases?

MH: The Supreme Court understands the great importance of weighing in on the future of Section 230 in light of recent developments, and the justices are not immune to the political climate. They are aware of the [January 6] hearings and of the questions being raised about holding big tech accountable.

GN: What concerns you most about the possibility of the court ruling in favor of the tech companies?

MH: The greatest concern of many privacy experts is that if blanket immunity is granted in the absence of reform to Section 230, there will be greater difficulty holding the companies accountable for spreading disinformation and misinformation.

Because of the control that [internet platforms now] have over content—to curate the content, drive visibility, and shape our views—it’s not as neutral as serving as a conduit of information. A lot of these tech companies have the power to drive our exposure and our interest in the content through the algorithms they use.

GN: If the judges rule against Google and/or Twitter, what impact could that have on content moderation?

MH: One of the great concerns about this case is what can happen to the First Amendment. Part of why Section 230 was passed was to preserve the freedom of speech. So, I think the complexity is, how do you strike that balance? How do you walk the line between First Amendment rights and safety? If you have technologies that allow the platforms not simply to host information but to curate and promote it, then it seems like Section 230 is outdated. It doesn’t really reflect the current situation. And this is why I think you have more calls for statutory reform or a better interpretation of Section 230 to limit the harms.

GN: The Twitter case also involves Section 2333 of the Anti-Terrorism Act, amended in 2018, which lets U.S. nationals harmed by international terrorism sue anyone who aided and abetted in the act. How will social media platforms’ liability be impacted if Twitter loses?

MH: I think that what’s critical is to assess the meaning of “knowingly” offering “substantial” support to a foreign terrorist organization. That is at the heart of the question in the Twitter case. Is that going to impact companies if that liability is extended? Yes, it would. But the hope would be that it would restrain harmful content in a way that’s responsible and careful and does not unnecessarily limit the freedoms of speech and expression of users.

GN: Could the Twitter case establish the company’s responsibility for hosting domestic terrorist communications, or is this limited to international terrorism?

MH: Well, because of the Anti-Terrorism Act and the particular case, this would be limited to international terrorism. You might have members of Congress considering whether it’s necessary to extend those protections to mitigate against domestic terrorism.

It’s a question of to what extent you’re going to shield against harms that inflame and radicalize extremists in the U.S. With the January 6 Committee report, and the recent report on social media that wasn’t included in it, you see [more discussion around] the potential liability of social media companies and whether there needs to be greater regulation to prevent violence. But that’s going to be left to potential statutes to address.

GN: If there were greater regulation of these platforms, do you think this could prevent extremists like those who stormed the Capitol on January 6 from organizing?

MH: A revision of Section 230 would possibly move us in the direction of trying to mitigate against the type of disinformation and misinformation that we saw leading to January 6.

The SAFE Tech Act, proposed in 2021 by [Democratic Senators Mark] Warner, [Mazie] Hirono, and [Amy] Klobuchar to modify and revise Section 230, would have extended liability to tech companies that, in whole or in part, created or funded the creation of the speech. So, to the extent you have the tech companies trying to feed upon disinformation and misinformation campaigns, the question is whether that can be construed as the “creation” of the speech.

GN: Could statutes like this deter violent speech from politicians? I think of Trump’s infamous “be there, will be wild” tweet, encouraging people to protest at the Capitol on January 6. Do you think internet companies would intervene to prevent tweets like this from having such an impact?

MH: That is something that the tech companies would say they are already trying to address. I’m not sure I necessarily see that direct result flowing from these cases. But I think that the tech companies understand there is a concern they need to address. I think courts also are grappling with the long-term consequences if they don’t try to take the issues seriously.

GN: There is concern that forcing big tech to moderate certain types of content could lead to the government becoming the arbiter of what content is or is not admissible. Do you think this concern is legitimate?

MH: Yeah, I think you always have to be careful about censorship and having the government intervene as some type of content moderation arbiter. But we have a history of setting up tests and guidelines that restrict speech in ways that are seen as constitutionally consistent when speech incites violence or advocates the overthrow of the government. Given that we do have a history of ensuring that we walk that line, there is precedent for taking the steps necessary to minimize some of the most detrimental impacts.


Gabrielle Nadler is an intern at the Washington Monthly. Follow her at @GabrielleNad1er.