
Supreme Court hears arguments on social media giants' legal protections

By A.L. Lee & Davis Giangiulio
Two cases before the U.S. Supreme Court could narrow Section 230 of the 1996 Communications Decency Act and strip away longstanding federal protections that keep big tech companies from being sued over content published by users. File Photo by Ken Cedeno/UPI

WASHINGTON, Feb. 21 (UPI) -- The U.S. Supreme Court heard oral arguments Tuesday in the first of two cases that could decide whether social media companies can be held liable for promoting incendiary content, including material tied to terrorist activities, that has been allowed to circulate widely on their platforms.

The forthcoming rulings could narrow the scope of Section 230 of the 1996 Communications Decency Act and strip away longstanding federal protections that keep big tech companies from being sued over content published by independent users.


In both cases, the court's nine justices must decide whether the statute's protections still apply when algorithms used by the tech companies target specific users with questionable content while spreading terrorist influence to a massive digital audience.

The case argued Tuesday, Gonzalez vs. Google, arose out of a lawsuit filed by the family of 23-year-old Nohemi Gonzalez -- an American student who was among 130 people killed in a 2015 Islamic State attack in Paris.


The lawsuit, filed under the Antiterrorism Act, accuses YouTube owner Google of allowing barbaric videos to be posted to the popular platform, where they went viral as recommendation algorithms surfaced the content to users. Terror groups have long recognized the power of social media as a recruiting tool, experts say.

A divided 9th U.S. Circuit Court of Appeals previously ruled in Google's favor, holding that Section 230 protects big tech even when it recommends inflammatory content -- so long as the algorithm treats that content the same way as all other content on the platform.

In its ruling, however, the lower court acknowledged that Section 230 "shelters more activity than Congress envisioned it would" and suggested that U.S. lawmakers, not the courts, move to clarify the scope of the law.

Following that decision, the Gonzalez family appealed to the U.S. Supreme Court, which agreed to hear the case last year.

Last month, Google filed a brief with the Supreme Court warning against "gutting" the statute, arguing that stripping its protections would push some sites to censor more content while others allowed more hate speech onto the Internet.

Critics say Section 230 shields big tech too broadly, allowing companies to avoid repercussions for harm done through their platforms or to disguise politically partisan activity.


Gonzalez is the first case that the Supreme Court has heard on this topic, said Georgetown Law Professor Anupam Chander.

"This is unprecedented," he said. "The circuit courts have reached the same conclusions, so there is a uniform interpretation of Section 230 as it currently stands."

Justices took issue with the expansive nature of both sides' positions at oral arguments Tuesday.

Justice Elena Kagan asked Gonzalez counsel Eric Schnapper if excluding algorithms from Section 230 would essentially make the statute useless, as algorithms run so much of the modern Internet. Schnapper responded that what matters is how the algorithm is used and what harm arises from it.

The justices also focused on how to determine when an algorithm is aiding and abetting harm, which Justice Sonia Sotomayor said is necessary for viable defamation claims.

"There has to be some intent," she said.

Schnapper struggled to answer that point but reiterated his argument that recommendations are not protected under Section 230.

"They're going to give me a catalog," he said about recommendations. "They created that content."

Meanwhile, the justices questioned Google's counsel, Lisa Blatt, about how far the statute reaches. Justice Clarence Thomas asked Blatt whether a company's endorsement of content is protected under Section 230, to which she responded that an endorsement is the company's own speech and therefore would likely not be protected.


She did, however, tell other justices that biased algorithms are protected under her interpretation of Section 230. The justices pushed further, asking whether an algorithm that kept White users from seeing news about racial justice would be shielded from litigation; her answer was less certain.

Other tech companies have come to Google's defense. In an amicus brief, Microsoft said "accepting [Gonzalez's] arguments would wreak havoc on the Internet as we know it."

Meta, the parent company of Facebook, said altering Section 230 would lead to the removal of more content from the platform.

"The floodgates of litigation will open if they are to be held liable for their failures in their recommendations services," Chander said. "Their economically self-interested response will be to remove controversial content that might expose them to liability."

Those concerns were raised during the hearing. The plaintiffs argued that little litigation would follow if Section 230 were weakened, but Blatt warned otherwise.

"Congress made that choice to stop lawsuits from stifling the Internet in its infancy," she said about the creation of Section 230. "The Internet would have never got off the ground."

Any shift toward heavier-handed moderation will come down to an economic calculation, Chander said, as companies weigh potential legal costs against increased scrutiny of their content regulation decisions.


"It costs less to take down speech," he said. "It's very costly to leave it up."

Efforts to remove or limit the law have been the subject of intense debate in Washington for the past several years.

Last April, Rep. Marjorie Taylor Greene, R-Ga., introduced a bill to establish a new law protecting online platforms from liability for user-posted content.

The bill, titled the 21st Century Free Speech Act, would abolish Section 230 and replace it with a "liability protection framework" that would require "reasonable, non-discriminatory access to online platforms" through a "common carrier" framework comparable to telephone, transportation and electric services.

For decades, social media companies have been immune from most civil actions over user-posted content, although Section 230 also expects companies to have protocols in place for removing objectionable material. Still, the law's reach is much different today than it was in the early days of social media, when Internet business models were largely driven by new subscriptions.

"Now most of the money is made by advertisements, and social media companies make more money the longer you are online," said Schnapper, who also represents victims in Twitter vs. Taamneh, which the high court has agreed to take up on Wednesday.


The latter case could determine whether Twitter, Facebook and Google can be held liable for aiding and abetting international terror groups that have used the platforms to radicalize a new generation of young, impressionable militants.

The Twitter case stems from a federal lawsuit filed by the Taamneh family -- relatives of Nawras Alassaf, a Jordanian national who was among 39 people killed in a 2017 terrorist attack in Istanbul.

The family accuses Twitter and the other tech giants of responding inadequately to extremist content despite knowing the platforms were being used deliberately to spread disinformation campaigns that glorify bloodshed and inflame ethnic and religious tensions worldwide.

The cases before the high court this week are similar to a class-action lawsuit filed in Kenya in late 2022 that seeks more than $2 billion from Facebook over accusations the social media giant is profiting from content that promotes ethnic and political violence throughout Africa.

Medill News Service contributed to this report.
