Internet platforms enjoy a legal status that exempts them from liability for content posted by their users. Tech giants such as Google, Meta, and TikTok’s parent company, ByteDance, fall under this umbrella thanks to Section 230 of the Communications Decency Act of 1996, which states that companies cannot be treated as publishers of content posted on their sites. That protection helped create the sprawling modern Internet. But over the past decade, YouTube, Twitter and other platforms have been accused of helping terrorists and insurgents radicalize converts and plot attacks. In this new era, should Internet giants be treated like phone companies, bearing no liability for plots hatched over their networks, or should they face greater accountability?
Next month, the Supreme Court will hear two cases that could narrow this legal immunity. In Gonzalez v. Google LLC, the family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack, will argue that Google is responsible not only for allowing the terrorist organization to post videos on YouTube but also for the algorithm that promotes those videos. In Twitter, Inc. v. Taamneh, the family of Nawras Alassaf, who was killed in a 2017 ISIS-affiliated attack in Istanbul, argues that Twitter is responsible for the growth of ISIS because the organization used Twitter to recruit and radicalize members.
Margaret Hu, a professor at William & Mary Law School and a faculty fellow at Penn State University’s Institute for Computational and Data Sciences, is an expert on national security in the age of social media and cyber surveillance. I spoke with Hu about the upcoming cases and content regulation.
This conversation has been edited and condensed for clarity.
GN: Why do you think the court decided to hear these two cases?
MH: The Supreme Court understands how important it is to consider the future of Section 230 in light of recent events, and the justices are not immune to the political climate. They can appreciate the questions the [January 6] hearings raised about Big Tech liability.
GN: What worries you most about the potential for court rulings in favor of tech companies?
MH: The biggest concern of many privacy experts is that if blanket immunity is granted without reforming Section 230, it will be more difficult to hold companies accountable for misinformation and disinformation.
Because of the control [internet platforms now have] over content, which lets them curate it, drive its visibility, and shape our views, they are not as neutral as a simple carrier of information. Many of these technology companies have the ability to drive our exposure and interest in content through the algorithms they use.
GN: If the justices rule against Google and/or Twitter, what impact might that have on content moderation?
MH: A great concern in these cases is what might happen to the First Amendment. Part of why Section 230 was passed was to protect freedom of speech. So I think the complication is, how do you strike that balance? How do you walk the line between First Amendment rights and safety? If the technology allows platforms to do far more than simply host information, then it looks like Section 230 is outdated. It doesn’t really reflect the current situation. And that’s why I think you have more calls for statutory reform, or for a better interpretation of Section 230, to limit harms.
GN: The Twitter case also involves Section 2333 of the Anti-Terrorism Act, as amended in 2018, which allows U.S. citizens harmed by international terrorism to sue those who aid and abet it. How would social media platforms’ liability be affected if Twitter loses?
MH: I think it is important to evaluate what it means to “knowingly” provide “substantial” support to a foreign terrorist organization. That is at the heart of the question for Twitter. Will that increase in liability affect companies? Yes, it will. But the hope is that it will deter harmful content in a way that is responsible and prudent and does not unnecessarily restrict users’ freedom of speech and expression.
GN: Can the Twitter case establish a company’s responsibility for hosting domestic terrorist communications, or is it limited to international terrorism?
MH: Well, it would be limited to international terrorism because of the antiterrorism statute and the specifics of these cases. You may have members of Congress considering whether those protections need to be expanded to mitigate against domestic terrorism.
It’s a question of how much you’re going to protect against the harm of inciting and radicalizing extremists in the United States, which you see with the January 6 committee report and with what the final report did not include, namely the potential liability of social media companies, [and more discussion around] whether greater regulation is needed to prevent violence. But that is going to be left to potential legislation to address.
GN: If these platforms had greater control, do you think it could prevent extremists from organizing like those who attacked the Capitol on January 6?
MH: A revision of Section 230 could probably steer us away from the kind of confusion and misinformation that we failed to mitigate in the lead-up to January 6.
The SAFE TECH Act, proposed in 2021 by [Democratic Senators Mark] Warner, [Mazie] Hirono, and [Amy] Klobuchar, would amend Section 230 to include liability for technology companies that have created or funded speech in whole or in part. So, to the extent that you have tech companies helping to spread confusion and misinformation, the question is whether that can be interpreted as “creation” of speech.
GN: Could such laws prevent violent speech by politicians? I’m thinking of Trump’s infamous “Be there, will be wild” tweet encouraging people to protest at the Capitol on January 6. Do you think Internet companies would intervene to prevent such tweets from having that kind of impact?
MH: It’s something that tech companies will say they’re already trying to engage with. I’m not sure I necessarily see direct results flowing from these cases. But I think tech companies realize they have a concern they need to address, and I think the courts are grappling with the long-term consequences if those issues are not taken seriously.
GN: There are concerns that if Big Tech is forced to moderate certain types of content, the government could become the arbiter of what content is acceptable or not. Do you think these concerns are valid?
MH: Yes, I think you always have to be wary of censorship and having the government intervene as some sort of content moderator. But we have a history of setting tests and guidelines that restrict speech in ways that are seen as constitutionally consistent when the speech incites violence or advocates the overthrow of the government. Given that we have a history of making sure we walk that line, there is precedent for taking the necessary steps to minimize the most harmful impacts.