The COLLUDE Act amends Section 230 of the Communications Act of 1934, removing liability protections for online platforms that restrict political speech based on government requests, while providing exceptions for legitimate law enforcement and national security purposes. It requires platforms to prove they are not information content providers to maintain immunity.
Eric Schmitt
Senator
MO
The COLLUDE Act amends Section 230 of the Communications Act of 1934, modifying protections for interactive computer service providers. To maintain immunity from liability, providers must prove they are not information content providers. The bill removes liability protection for providers who restrict political speech based on government or government-influenced communications, with exceptions for legitimate law enforcement and national security purposes.
The COLLUDE Act – short for the "Curtailing Online Limitations that Lead Unconstitutionally to Democracy's Erosion Act" – significantly reshapes Section 230 of the Communications Act, the key law governing online platform liability. The bill removes legal protections from platforms that restrict "legitimate political speech" if that restriction is influenced by the government. Basically, if Uncle Sam asks (or even hints) that a platform should take down a post, and the platform complies, the platform could be held liable for that censorship.
The core change here is a big one: online platforms would have to prove they aren't acting as "information content providers" to avoid liability in both criminal and civil cases. This is a shift from the current setup, where the burden of proof is typically on the accuser. The bill specifically targets situations where a platform restricts access to content that "appears to express, promote, limit, or suppress legitimate political speech" (Section 2(b)(1)). If that restriction stems from communication with a government entity, or with a non-government entity acting on the government's behalf, the platform loses its Section 230 shield.
Imagine a local activist group using a social media platform to organize protests against a new development project. If a city official contacts the platform and suggests the group is violating some obscure ordinance (even if it's a stretch), and the platform takes down the group's posts or suspends their account, the platform could now face legal action. The same goes for, say, a small business owner criticizing a new tax law on their business page – if a government agency leans on the platform, the platform is exposed.
There are significant exceptions carved out for a "legitimate law enforcement purpose" and a "national security purpose" (Section 2(b)(2)). The bill defines both terms, but the definitions are still open to interpretation. "Legitimate law enforcement purpose" means communication that helps an agency investigate a crime within its authority (Section 2(b)(3)(A)). "National security purpose" covers a wide range of activities, including intelligence, military operations, and anything directly related to a military or intelligence mission (Section 2(b)(3)(B)).
The concern? These exceptions could be broadly applied. Could a government agency claim that suppressing criticism of a foreign policy decision is a "national security purpose"? It's not spelled out, leaving room for potential overreach.
This bill directly amends Section 230, a foundational piece of internet law. The challenge will be in how courts interpret "legitimate political speech" and the scope of the law enforcement/national security exemptions. The long-term implications are potentially huge: a chilling effect on platforms' willingness to host controversial political content, and a possible increase in government influence over what we see (and don't see) online. The practical effect is that platforms may become hesitant to remove any content, even content that borders on harmful, for fear of losing their legal protection if a government entity had any communication about it.