“Today’s decisions should be commended for recognizing that the rules we apply to the internet should foster free expression, not suppress it,” said the deputy director of the ACLU’s National Security Project.
Civil liberties advocates on Thursday praised the U.S. Supreme Court for a pair of unanimous rulings that they say uphold the right to free speech on online platforms.
Alongside its partners, the ACLU “filed amicus briefs in both cases urging the court to ensure online platforms are free to promote, demote, and recommend content without legal risk in order to protect political discourse, cultural development, and intellectual activity,” the group noted in a statement.
“Free speech online lives to fight another day,” said Patrick Toomey, deputy director of ACLU’s National Security Project. “Twitter and other apps are home to an immense amount of protected speech, and it would be devastating if those platforms resorted to censorship to avoid a deluge of lawsuits over their users’ posts. Today’s decisions should be commended for recognizing that the rules we apply to the internet should foster free expression, not suppress it.”
According to the ACLU’s statement:
In Twitter v. Taamneh, the plaintiffs claimed that Twitter was liable for allegedly “aiding and abetting” an attack in Istanbul by ISIS because Twitter failed to adequately block or remove content promoting terrorism — even though it had no specific knowledge that any particular post furthered a terrorist act. The court held that hosting, displaying, and recommending videos, without more, is not aiding and abetting terrorism.
As the ACLU’s amicus brief in Twitter v. Taamneh explained, if the Supreme Court allowed the 9th U.S. Circuit Court of Appeals’ startlingly broad interpretation of the Anti-Terrorism Act to stand, online intermediaries—like internet service providers, social media platforms, publishers, and other content distributors—would be forced to suppress the First Amendment-protected speech of many of their users. The brief explained that, given the vast scale of speech occurring on platforms like Twitter every day, online intermediaries would be compelled to use blunt content moderation tools that over-restrict speech by barring certain topics, speakers, or types of content in order to avoid claims that they went too far in making that information available to an interested audience. Even today, platforms frequently take down content mistakenly identified as offensive or forbidden, for example, by confusing a post about a landmark mosque with one about a terrorist group.
In Gonzalez v. Google, the court noted that in light of its decision in Twitter v. Taamneh, “little if any” of the plaintiffs’ case remained viable. It was therefore unnecessary to address the question of whether Section 230 of the Communications Decency Act immunized the platform’s recommendation algorithms. The court remanded the case to the 9th U.S. Circuit Court of Appeals to determine whether any part of the plaintiffs’ argument could move forward in light of the Twitter ruling.
David Greene, director of civil liberties at the Electronic Frontier Foundation (EFF), also welcomed the court’s rulings in both cases.
EFF is “pleased that the court found that an online service cannot be liable for terrorist attacks merely because their services are generally used by terrorist organizations the same way they are used by millions of organizations around the globe,” Greene said in a statement.
He added that EFF is “pleased that the court did not address or weaken Section 230, which remains an essential part of the architecture of the modern internet and will continue to enable user access to online platforms.”
Section 230 is a federal liability shield that generally prevents social media and other websites from facing defamation lawsuits or being held accountable for third-party content generated by users or paid advertisers. The immunity provision has come under increased scrutiny from many members of Congress in both major parties.
One countervailing opinion about the court’s decision not to reexamine Section 230 came from the Real Facebook Oversight Board, a coalition of researchers and advocates who seek to counter the harms associated with the profit-maximizing algorithms used by Facebook and Instagram, both of which are owned by Meta.
“Meta wasn’t on trial today in the Supreme Court, but their rapacious business model was,” the group said in a statement. “In no surprise, the extremist U.S. Supreme Court chose profit over privacy and safety. More than ever, U.S. lawmakers must act to pass sweeping, meaningful regulation of Big Tech—before more users are harmed or worse by hate speech that platforms won’t and can’t stop.”
Sen. Ron Wyden (D-Ore.), however, echoed the assessment shared by the ACLU and EFF, calling the court’s decision to leave Section 230 untouched “good news.”
“Despite being unfairly scapegoated for everything wrong with the internet, Section 230 remains vitally important to protecting online speech,” argued Wyden, who co-wrote the 1996 statute with former Rep. Chris Cox (R-Calif.). “My focus remains helping end abusive practices by tech companies while protecting freedom of information online.”
According to Politico, the high court’s decisions “mark a major win for the tech industry, which has argued that narrowing Section 230 could be disastrous for the internet if platforms could be sued over content-moderation decisions. But the resolution leaves the door open to future showdowns—potentially in Congress—over the breadth of the legal protection the internet firms enjoy.”
This work is licensed under Creative Commons (CC BY-NC-ND 3.0).