Critics Take AIm At Section 230

Gamma Law

Contributor

Gamma Law is a specialty law firm providing premium support to select clients in cutting-edge media/tech industry sectors. We have deep expertise in video games and esports, VR/AR/XR, digital media and entertainment, and cryptocurrencies and blockchain. Our clients range from founders of emerging businesses to multinational enterprises.

Generative AI can now produce audio and video experiences that are indistinguishable from the real thing. Technology enables the digital dissemination of user-generated content (UGC) faster and more widely than ever. While these developments promise to transform the way humans work, play, learn, and interact, they can also enable antisocial and illegal behaviors.

There is legitimate concern that these two vital components of the Web3 revolution could combine to threaten user privacy and facilitate bullying, hate speech, human trafficking, and other harms. The anonymity provided by online aliases, avatars, and blockchain protocols allows villains to emerge from the dark web and hidden corners of the internet. They can now leverage legitimate websites and services to plan and carry out their nefarious schemes.

Content Moderation

This reality has led governments, advocates, and stakeholders to re-examine policies that regulate the placement of UGC on message boards, discussion rooms, social media, and multiplayer video game platforms. These hosts strive to balance freedom of expression with the need to maintain a safe and respectful environment. They aim to protect their brand reputations, ensure positive user experiences, and establish robust response mechanisms for problematic content or conduct.

Navigating the online content moderation landscape, however, involves considerable legal complexity. The type of content involved, how it was created and distributed, and its inherent message all bear on moderation decisions. While most jurisdictions give platforms some measure of protection from liability for harm caused by hosting — but not generating — content on their sites, platforms must remain vigilant, proactive, and quick in removing third-party content that infringes intellectual property rights, incites violence, contains offensive images, promotes intolerance, or spreads defamatory statements.

This is especially true in light of recent events that have some people calling for scaling back Section 230 of the US Communications Decency Act of 1996.

Global Approaches

Section 230 shields online platforms from liability for third-party content, granting them broad discretion in content moderation. Specifically, it states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Europe has been more progressive in regulating UGC. Unlike Section 230, the EU's Digital Services Act requires marketplaces, social networks, content-sharing sites, app stores, and other platforms to ensure the timely removal of illegal content. These sites must also communicate transparently about their content moderation practices and the redress mechanisms available to users who believe they have been harmed. As in many regulatory areas, Europe's more hands-on approach reflects the continent's commitment to safeguarding user rights, while the US emphasizes creativity, freedom, and business development.

Japan has staked out a middle ground. Its Provider Liability Limitation Act mirrors Section 230's immunity for platforms that host unlawful third-party content while calling on platforms to establish voluntary content moderation and self-regulation guidelines.

Section 230 Under Fire

Section 230 has been a favorite target for members of Congress on both sides of the aisle. Republicans generally make a free speech argument: sites should not be allowed to remove UGC arbitrarily, because that discretion empowers platforms to dictate political, moral, religious, and other discourse by excluding opinions they deem biased or noninclusive. Democrats, for their part, argue that Section 230's near-universal immunity should not apply to sites that fail to act swiftly and decisively to remove hate speech, disinformation, and slanderous or libelous posts.

These divergent roads to reform merged in 2023 when Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced legislation proposing a carve-out that would empower harmed individuals and organizations to sue platforms that allow the posting of deepfakes and other AI-generated content. The senators argue that generative AI introduces new risks of misinformation, impersonation, and privacy violations that Section 230 was never intended to shield. If AI can fabricate believable video or text at scale, they contend, the country needs updated regulations that incentivize platforms to monitor how this technology is deployed on their sites.

The Hawley/Blumenthal bill, known as the No Section 230 Immunity for AI Act, seeks reform on three fronts:

  1. AI Specificity — Removing immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI so they could be held liable for harmful content produced by AI, such as deepfakes.

  2. Legal Relief — Offering Americans harmed by generative AI models the right to seek financial redress in federal or state court.

  3. Accountability — Specifying that AI companies can be targeted for civil action and, given these liability risks, should take responsibility for their business decisions as they develop products.

Court as Catalyst

The bill may be viewed as a response to recent court decisions upholding platform immunity. Those decisions did not involve AI, but the bill's passage would also give Congress a mechanism for stripping some of the power giant companies like Meta, Alphabet, and Microsoft wield in the AI market, a comeuppance some believe is long overdue.

In the latest case to cast a spotlight on Section 230, SCOTUS declined to hear an appeal of a lower court ruling that Reddit could not be held liable for its users posting child pornography on its platform. Though the Communications Decency Act was amended in 2018 by the Fight Online Sex Trafficking Act (FOSTA), the US Court of Appeals for the 9th Circuit concluded that FOSTA's carve-out applies only if an internet company "knowingly benefited" from coercive or child sex trafficking through its own conduct. Simply "turning a blind eye" to the content is not enough to forfeit immunity, the circuit court decided; Reddit was not involved in "assisting, supporting, or facilitating" the illegal activity.

The Reddit decision came on the heels of two other rulings in media platforms' favor.

  • Twitter, Inc. v. Taamneh — Relatives of a person killed in an ISIS attack in Turkey sued Twitter for allegedly abetting the terrorist organization by allowing it to maintain accounts and disseminate propaganda on the platform, facilitating the radicalization of the people who carried out the attack. The plaintiffs argued that Twitter went beyond simply hosting user-generated content and actively promoted and amplified ISIS's messaging. The Supreme Court, however, agreed with Twitter's contention that it was merely a neutral platform and could not be held responsible for the independent actions of those who used its services. Merely hosting or failing to remove harmful content, the Court held, did not amount to knowingly providing substantial assistance to the attack; platforms are responsible only for their own conduct, such as creating, developing, or soliciting unlawful content. Because the claims failed on those grounds, the Court never had to decide whether Section 230 applied.

  • Gonzalez v. Google LLC — In a companion case decided the same day, the Supreme Court effectively sided with Google in a suit brought by the family of a victim of an ISIS attack in Paris. The plaintiffs contended that Google's recommendation algorithms went beyond passive hosting and constituted active curation and promotion of ISIS's messaging, nullifying its immunity. Rather than rule on Section 230, the Court concluded in a brief, unsigned opinion that the underlying claims were likely to fail under Taamneh and remanded the case, leaving undisturbed the lower court's reasoning that the algorithms were essentially neutral tools for organizing and displaying user-generated content and that holding platforms liable for the output of such tools would undermine the purpose of Section 230: promoting the free flow of information on the internet.

Section 230 Seems Safe for Now

The high court seems content to leave social media platforms' protections in place. While decisions in a few potentially game-changing cases are pending, successive platform victories point to continued immunity. Still, any business or organization planning to feature user-generated content should have its business model reviewed by an experienced Web3 attorney.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
