This week is Tech Week at the Supreme Court, at least unofficially. The justices returned from a month-long break to hear oral arguments on Tuesday in Gonzalez v. Google, the first of two back-to-back cases about Silicon Valley and legal liability. At stake is whether internet companies both large and small can be hauled into court for things that other people post on their websites.
At the center of the case is something colloquially known as Section 230. The 26-word provision of the Communications Decency Act of 1996 states that a website’s creator “shall [not] be treated as the publisher or speaker of any information” posted by a third party. In other words, companies like Facebook and Twitter generally can’t be sued for actionable things that other people post on their services. Silicon Valley has credited Section 230 with the internet’s explosive growth and widespread adoption over the nearly three decades since.
After three hours of arguments the justices appeared unwilling to broadly rewrite the provision. Justice Elena Kagan quipped at one point that she and her colleagues were “not the nine greatest experts on the internet.” But the justices also struggled at times to get clarity on the precise argument being made by the plaintiffs, who generally want the court to allow more legal challenges to companies like Google.
Eric Schnapper, a University of Washington law professor who argued on behalf of the plaintiffs, said Section 230 had been outpaced by technological developments since 1996. “The statute was written to address one or two very specific problems about defamation cases, and it drew lines around certain kinds of things and it protected those,” he told the justices. “It did not and could not have been written in such a way to protect everything else that might come along that was highly desirable.”
Recommendation algorithms, which many websites use to offer content to their users, are at the center of the case. But exactly when and how these features cross the liability line was less than clear to the justices. “I’m afraid I’m completely confused by whatever argument you are making,” Justice Samuel Alito told Schnapper after an extended discussion about whether thumbnail images would increase Google’s exposure to potential lawsuits. Justice Ketanji Brown Jackson later added that she was “thoroughly confused” by the plaintiffs’ apparent conflation of their arguments with those in the related case being heard on Wednesday.
Some later laws passed by Congress have carved out explicit exceptions to Section 230, most notably for sex trafficking in 2018. The family of Nohemi Gonzalez argued that there was an implicit exception as well for victims of terrorism. In 2015, ISIS-aligned militants carried out a series of attacks across Paris that killed 130 people and wounded more than 400 others. Gonzalez, a 23-year-old American college student who was studying abroad in France, was among those killed in the attacks.
Her family filed a lawsuit against Google the following year under the federal Anti-Terrorism Act. That law allows Americans to bring claims against individuals or groups that knowingly provide “material support” to acts of international terrorism. According to the Gonzalez family, Google contributed to the 2015 attacks by allowing ISIS and its sympathizers to post recruitment videos on its subsidiary YouTube and by letting its recommendation algorithms distribute those videos to a much broader audience.
Google denied any wrongdoing, noting that the Gonzalez family did not offer any proof that YouTube videos were actually used to facilitate or organize the Paris attacks. (The Gonzalez family did note that one of the attackers was a frequent YouTube user and had appeared in a recruitment video there.) Its recommendation algorithms, the company claimed, respond to user inputs like a search engine would rather than reflect YouTube’s own editorial preferences. It also argued that Section 230 would nonetheless insulate the company from legal action over the content it had hosted.
That interpretation persuaded judges in the Ninth Circuit Court of Appeals, where a three-judge panel sided with Google in deference to precedent in 2021 but also expressed misgivings about the law’s scope. They are hardly alone. As I noted in October, a growing number of Democrats and Republicans have criticized Section 230 in recent years for giving too much leeway to internet companies. Among them are both President Joe Biden, who says the companies haven’t done enough to curb extremist content on their platforms, and former President Donald Trump, who has complained that the companies are supposedly censoring him and other conservative voices. (The irony here should not be overlooked.)
Lisa Blatt, who represented Google, warned the court that adopting the plaintiffs’ theory about algorithms could upend the internet as most users know it. “Helping users find the proverbial needle in the haystack is an existential necessity on the Internet,” she told the justices. “Search engines thus tailor what users see based on what’s known about users. So does Amazon, Tripadvisor, Wikipedia, Yelp!, Zillow, and countless video, music, news, job-finding, social media, and dating websites. Exposing websites to liability for implicitly recommending third-party content defies the text [of Section 230] and threatens today’s Internet.”
That drew pushback from a few justices, who questioned when recommendation and curation went beyond simply hosting third-party material. “The [ISIS] videos just don’t appear out of thin air,” Chief Justice John Roberts told Blatt. “They appear pursuant to the algorithms that your clients have. And those algorithms must be targeted to something.” Blatt countered that the same could be said of search engines in general. “You could call all of them a recommendation that are tailored to the user because all search engines take user information into account,” she noted, pointing to the different things that appear when American and European users enter “football” into Google.
At least a few justices appeared interested in construing Section 230 in a way that wouldn’t affect most recommendation algorithms but also narrow its scope in other instances. “As I see it, we have a few options,” Justice Neil Gorsuch suggested at one point during oral arguments. “We could say that YouTube does generate its own content when it makes a recommendation—says ‘up next.’ We could say no, that’s more like picking and choosing. Or, we could say the Ninth Circuit’s neutral-tools test was mistaken because, in some circumstances, even neutral tools like algorithms can generate, through artificial intelligence, forms of content.”
Others, however, expressed concern about ruling that some forms of algorithm-based curation are outside Section 230’s scope. “Your position, I think, would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of [Section] 230,” Justice Brett Kavanaugh remarked to Schnapper. “And just as a textual and structural matter, we don’t usually read a statute to, in essence, defeat itself.”
On Wednesday, the Supreme Court will get a chance to explore another aspect of the issue: whether tech companies are liable at all under federal anti-terrorism laws, regardless of Section 230, for ISIS and its allies’ use of their services in the 2010s. That case, Twitter v. Taamneh, is based on similar facts: the family of a man who was killed in a 2017 attack on a Turkish nightclub sued Twitter for “aiding and abetting” the attack by not doing enough to suppress ISIS material on its platform. Section 230 won’t be a direct issue in that case. But the uncertainty over where to draw a line, and the confusion over the plaintiffs’ exact arguments during Tuesday’s oral argument in Gonzalez, may lead the justices to consider whether both cases can be resolved on the grounds at issue in Taamneh.