The "Stop the Censorship Act" modifies Section 230(c) of the Communications Act to narrow liability protections for online platforms, limiting them to the removal of unlawful material and to providing users with tools to restrict access to content.
Paul Gosar
Representative
AZ-9
The "Stop the Censorship Act" seeks to narrow the scope of Section 230 of the Communications Act of 1934, modifying the protections for online platforms regarding user-generated content. It redefines the material whose removal is protected as "unlawful material" and brings actions taken to give users the option to restrict access to other material, regardless of constitutional protection, within the scope of the Act. These changes would revise the immunities online platforms receive for content moderation.
The "Stop the Censorship Act" aims to significantly change Section 230 of the Communications Act of 1934, the key law that shields online platforms from liability for user-generated content. This bill shifts the focus from protecting platforms' moderation of "objectionable" material to only protecting them from liability for "unlawful" material. This seemingly small wording change could have a big impact.
The core change is swapping "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" in Section 230(c)(2)(A) with the much narrower term "unlawful material." This means platforms could be held legally responsible for a much wider range of moderation decisions. The original language, while broad, allowed platforms to remove harmful, but not necessarily illegal, content without fear of lawsuits. This bill takes that leeway away.
Imagine a local business owner dealing with a flood of fake, negative reviews on their online page, posted by a competitor. Under current Section 230, the platform could remove these reviews as "harassing" or "objectionable" without legal risk. Under the "Stop the Censorship Act," that same platform might hesitate: unless those reviews are provably illegal (like defamation, which requires proving specific false statements and intent), the platform might leave them up to avoid lawsuits. This could create a chilling effect, where platforms err on the side of caution and allow harmful content to remain online. Or imagine a coder who has just launched a new social media app. They would now face continuous legal risk, and if they can't afford sophisticated lawyers, they might shut down their platform altogether.
The bill also adds a new section, subparagraph (C), allowing platforms to give users tools to restrict access to any material, "whether or not such material is constitutionally protected." While seemingly empowering, this could lead to users creating echo chambers, blocking not just harmful content, but also diverse viewpoints or legitimate news.
This bill is a direct response to the ongoing debate about online censorship and platform power. However, by narrowing the scope of Section 230 protection, it could make the internet a much riskier place for both platforms and users. The shift to "unlawful" material introduces a significant legal challenge: what counts as "unlawful" can be complex and can vary by jurisdiction. This ambiguity could trigger a surge in lawsuits, forcing platforms either to drastically restrict user content or to face constant legal battles. It also raises questions about how smaller platforms, without vast legal resources, would navigate this new landscape. The bill's changes, though framed as promoting free speech, could paradoxically lead to more censorship as platforms try to minimize their legal risk.
The "Stop the Censorship Act" isn't just tweaking existing rules; it's fundamentally changing the game. The shift to "unlawful material" as the standard for liability protection could reshape how the internet operates, impacting everyone from small business owners to software developers, and ultimately, anyone who uses online platforms.