This Act removes Section 230 immunity for large social media platforms that intentionally or knowingly host false information regarding election administration logistics and establishes a process for the rapid removal of such content.
Sponsor: Senator Peter Welch (VT)
The Digital Integrity in Democracy Act removes Section 230 immunity for large social media platforms that intentionally or knowingly host false information regarding election administration logistics. The legislation establishes a strict timeline for platforms to review and remove such objectively false content upon receiving a complete complaint. Failure to comply with these removal requirements subjects platforms to significant financial penalties enforceable by the Attorney General, State officials, or affected candidates. The Act specifically excludes content that attacks or supports political candidates, parties, or officeholders from the definition of covered false information.
This new proposal, the Digital Integrity in Democracy Act, is a direct response to the election misinformation problem, but it’s laser-focused on one very specific type of falsehood: the nuts and bolts of how to vote. The bill carves out a rare exception to Section 230, the law that usually shields platforms from liability for user content, and applies that exception only to large social media platforms (those with 25 million or more users). Essentially, if a covered platform knowingly or intentionally hosts objectively false information about election logistics (think wrong polling place addresses, incorrect voting dates, or bogus eligibility requirements), it can lose its immunity for that specific post.
It’s important to note how narrow this exception is. This bill isn't about political opinions or even general election conspiracy theories. It explicitly excludes content that attacks or supports a political candidate, a party, or an officeholder. This is strictly about the operational details of a “covered election.” For example, if a user posts that Election Day is on a Tuesday in November (true) but claims you need a specific, non-existent ID to vote (false), that’s the kind of logistical misinformation this bill targets. If a platform is found to have knowingly hosted that kind of objectively wrong information, it opens itself up to liability.
Section 3 of the Act sets up a mandatory content removal process that puts platforms under serious time pressure. If a platform receives a formal, written complaint (called a “notification”) about false election logistics, it has 48 hours to investigate and remove the content if it confirms the content is false. Here’s the kicker: if the complaint comes in on “election day” (which the bill defines to include the entire early voting period), that removal deadline shrinks to just 24 hours.
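To make the two timelines concrete, here is a minimal sketch of how the deadlines interact, in Python. The function and variable names, and the during_voting_period flag, are illustrative assumptions for clarity, not terms from the bill itself.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the bill's two removal windows; names are
# assumptions chosen for readability, not statutory terms.
STANDARD_WINDOW = timedelta(hours=48)  # ordinary complaints
ELECTION_WINDOW = timedelta(hours=24)  # complaints received on "election day,"
                                       # which the bill stretches to cover the
                                       # entire early voting period

def removal_deadline(received_at: datetime, during_voting_period: bool) -> datetime:
    """Latest moment a platform can remove confirmed-false content
    after receiving a complete notification."""
    window = ELECTION_WINDOW if during_voting_period else STANDARD_WINDOW
    return received_at + window

# A complaint filed during early voting leaves half the usual time.
print(removal_deadline(datetime(2026, 11, 2, 9, 0), during_voting_period=True))
# -> 2026-11-03 09:00:00
```

The practical upshot of the shorter window is that the clock runs fastest exactly when complaint volume is likely to peak.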
If the platform fails to remove confirmed false content within that tight window, the consequences are steep. The Attorney General, State Attorneys General, or even harmed candidates can sue. The penalty? A whopping $50,000 for every single piece of false information not removed on time. For platforms, this creates a massive financial incentive to err on the side of caution and speed. However, platforms get to keep their Section 230 immunity for content they remove promptly, either proactively or in response to a complaint, which is a strong shield against being sued for wrongful takedowns.
For the average voter, the benefit is clear: less noise and fewer malicious attempts to confuse you about how to cast your ballot. The platform is now financially motivated to clean up the most dangerous, logistics-based lies.
But there are practical challenges. To file a formal complaint, the bill requires the person reporting the false post to submit their full name, phone number, email, and mailing address. If you’re a concerned citizen or a whistleblower who wants to flag misinformation, you have to give up a significant amount of personal identifying information to start the process. This requirement could easily discourage reporting, especially among those who prefer to remain anonymous or fear retribution.
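For a rough picture of what a complete complaint entails, here is a sketch of the required identifying fields as a Python record. Every field name below is an illustrative assumption, and flagged_content_url is added only to make the example self-contained; the bill specifies the categories of information, not these labels.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    """Identifying details a complainant must supply before a complaint
    counts as a complete notification (field names are illustrative)."""
    full_name: str
    phone_number: str
    email: str
    mailing_address: str
    flagged_content_url: str  # assumed field identifying the post at issue
```

Note that every required field except the last is personally identifying, which is the crux of the anonymity concern described above.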
For the platforms, the pressure to meet the 24-hour deadline during peak election times is immense. While the bill targets only objectively false information, the sheer speed required means moderators might not have time for a nuanced review. This could lead to what’s known as over-censorship: platforms removing borderline or even protected speech just to avoid the $50,000 fine, impacting users whose posts might be critical of election procedures but not technically “false election administration information.” This tension between speed, liability, and free speech is where the real-world impact of this bill will be tested.