H.R. 5681
119th Congress · October 3, 2025
STOP HATE Act of 2025
IN COMMITTEE

The STOP HATE Act of 2025 mandates that large social media companies publicly detail their policies against terrorist content and report their enforcement actions against such content to the Attorney General three times annually.

Sponsor: Rep. Josh Gottheimer (D, NJ-5)


STOP HATE Act Forces Social Media Giants to Report Terror Content Enforcement Three Times a Year, Penalties Hit $5M Daily

The Stopping Terrorists Online Presence and Holding Accountable Tech Entities Act of 2025—the STOP HATE Act—is a regulatory heavyweight aimed squarely at the biggest social media platforms. Think Facebook, X, Instagram, TikTok, and any other site under FTC jurisdiction that lets you create a profile and share content, and that has at least 25 million monthly U.S. users. The core of this bill is transparency and accountability: it forces these companies to publish exactly how they handle content from designated foreign terrorist organizations and individuals, and then report their enforcement actions to the federal government three times a year.

The New Rules of the Road for Big Tech

Within six months of this law passing, covered platforms must clearly publish their terms of service, specifically spelling out their policy on designated terrorist content. This isn't just about having a policy; they have to detail the exact process for users to flag content, how quickly the company promises to respond, and the range of actions they might take—from removing a post to kicking a user off the platform. If you’ve ever tried to report something online and felt like your feedback went into a black hole, this section (SEC. 2) is designed to make that process crystal clear and hold platforms to their word.

The Triannual Report Card: Data Dump to the DOJ

The real teeth of the STOP HATE Act are in the reporting requirements. Three times a year (by January 31, April 30, and October 31), covered companies must send a detailed report to the Attorney General. This isn’t just a summary; they have to provide granular data on every piece of terrorist content that was flagged and “actioned,” meaning the content was removed, demonetized, or even just deprioritized in the feed, or the user behind it faced sanctions.

Imagine the data they have to track: they must report how many times content was viewed or shared before it was dealt with. They also have to break down whether the content was flagged by an employee, AI, community moderators, or a regular user, and who ultimately made the decision to take action. This level of detail is meant to show exactly how effective—or ineffective—a platform’s moderation systems are. The Attorney General then takes all this raw data and puts it into a searchable public database on the Department of Justice website, pulling back the curtain on how these giants police their own digital streets.

The $5 Million Question: Penalties and Pressure

Why would companies comply with such a massive data collection effort? Because the cost of failure is astronomical. If a company misses a report deadline, fails to publish its terms, or knowingly misrepresents information, the Attorney General can seek a civil penalty of up to $5,000,000 per violation, per day. That kind of financial hammer makes compliance a top-tier priority. For a platform, the risk of a multimillion-dollar fine accruing every single day creates a powerful incentive to over-moderate. That pressure can lead to the removal of content that is borderline but not actually illegal, a phenomenon known as a “chilling effect” on speech.

While the bill explicitly states that nothing in it is meant to infringe on First Amendment rights, the sheer weight of the potential penalties could push platforms to err on the side of caution and remove more content than necessary just to avoid a regulatory headache. This high-stakes environment means that while we gain transparency on terrorist content, we might also see platforms becoming more aggressive in their general content moderation to reduce risk.

It’s also important to note that the entire regulatory scheme carries a five-year expiration date (a “sunset clause”), suggesting the government intends this to be a trial run. The bill is a massive data grab designed to hold platforms accountable, but the real-world impact for users will be determined by whether platforms can balance the pressure of a $5 million daily fine with the protection of free speech.