This bill establishes a duty of care for social media platforms' recommendation algorithms and removes certain Section 230 liability protections, allowing victims to sue for resulting bodily injury or death.
John Curtis
Senator
UT
The Algorithm Accountability Act amends Section 230 to impose a new duty of care on social media platforms regarding their recommendation algorithms. This duty requires platforms to use reasonable care to prevent foreseeable bodily injury or death resulting from the algorithm's operation. A violation of this duty strips the platform of its standard liability protection, allowing victims to sue for damages in federal court. The law specifically exempts content displayed chronologically or in direct response to a user's initial search query.
The “Algorithm Accountability Act” takes aim at the biggest tech companies, fundamentally changing the rules for liability when their recommendation algorithms cause physical harm. The bill creates a new “duty of care” for large social media platforms (those with more than 1 million registered users), requiring them to use “reasonable care” when designing, training, and deploying algorithms to prevent “reasonably foreseeable bodily injury or death.”
To understand what this means, you have to know about Section 230 of the Communications Act. Right now, Section 230 acts like a massive legal shield, protecting platforms from liability for most content posted by their users. This bill chips away at that shield, but only under very specific circumstances. If a platform violates this new duty of care, and that violation leads to someone’s bodily injury or death—whether the victim is a user or someone harmed by a user—the platform loses its Section 230 protection and can be sued in federal court for damages, including punitive damages. This means that if an algorithm promotes content that leads to a user self-harming or encourages a user to commit an act of violence against a third party, the platform could be on the hook.
This isn't a blanket rule for all social media. The new liability applies only to recommendation-based algorithms—the automated systems that rank, order, promote, or amplify content based on your personal data and preferences. Think of the “For You” page on TikTok or the suggested videos on YouTube. If you rely on these platforms for information or entertainment, they now have a legal incentive to make sure their engagement-driven algorithms aren't pushing extreme or dangerous content that could lead to physical harm. The bill also makes clear that platforms cannot use pre-dispute arbitration clauses to block these lawsuits, meaning victims get their day in court.
There are significant carve-outs that limit the scope of this new rule. First, the liability shield remains fully intact for smaller platforms (those with fewer than 1 million registered users). Second, the “duty of care” does not apply to content displayed in simple chronological or reverse chronological order. If you’re scrolling a feed that’s just showing you the latest posts in order, the platform is still fully protected. It also doesn't apply to the initial results page of a direct search query. This means platforms could simply offer users a non-algorithmic feed option to reduce their risk.
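To make that carve-out concrete, here is a minimal, purely hypothetical sketch in Python. The names (Post, predicted_engagement, chronological_feed, recommendation_feed) are invented for illustration and appear nowhere in the bill; the point is only to contrast an exempt chronological feed with the kind of personalized ranking the new duty of care would reach.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    timestamp: datetime
    predicted_engagement: float  # hypothetical score derived from a viewer's personal data


def chronological_feed(posts: list[Post]) -> list[Post]:
    # Exempt under the bill: newest posts first, no use of the viewer's personal data.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)


def recommendation_feed(posts: list[Post]) -> list[Post]:
    # Covered by the duty of care: ranking driven by personalized engagement predictions,
    # i.e., a system that ranks, orders, promotes, or amplifies content for a specific user.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

Under this reading, a platform offering only the first kind of feed keeps its full Section 230 protection, while the second kind triggers the “reasonable care” obligation described above.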
Crucially, the bill explicitly states that this new duty cannot be enforced based on the viewpoint of any speech protected by the First Amendment. This is the bill’s attempt to balance safety requirements with free speech concerns, ensuring platforms aren’t penalized simply for hosting controversial but legal content.
The biggest challenge, and the likeliest source of litigation, is the term “reasonable care.” What constitutes “reasonable care” when designing a massive, constantly evolving AI system? The bill requires platforms to show reasonable care in designing, training, testing, deploying, operating, and maintaining their algorithms. This standard is inherently vague and will likely be defined over time by courts and juries. For platforms, that vagueness is a huge risk because it makes it difficult to know exactly what steps they must take to avoid a lawsuit. For plaintiffs, it offers flexibility, but it also means they face a high burden of proof to show that the platform’s specific design choices were negligent and directly led to physical harm. It’s a high-stakes legal tightrope walk for everyone involved.