This bill amends Section 230 to impose a duty of care on social media platforms regarding their recommendation algorithms, with a loss of liability protection for violations that lead to bodily injury or death.
Mike Kennedy, Representative, UT-3
The Algorithm Accountability Act amends Section 230 to impose a duty of care on social media platforms regarding their recommendation algorithms, requiring them to use reasonable care to prevent reasonably foreseeable bodily injury or death caused by those systems. Failure to meet this standard results in the loss of existing liability protections under Section 230. The Act also establishes a private right of action, allowing victims to sue platforms for damages resulting from algorithmic harm. The duty does not apply to simple chronological sorting or to the initial results of a user-initiated search, and the Act preserves First Amendment protections.
This bill, titled the Algorithm Accountability Act, takes a swing at Section 230 of the Communications Act, the provision that generally shields social media companies from liability for content posted by users. Specifically, it amends Section 230 by creating a new "duty of care" for large social media platforms regarding their recommendation algorithms. If enacted, platforms with more than 1 million users would have to use "reasonable care" when designing and operating these algorithms to prevent "reasonably foreseeable" bodily injury or death caused by the algorithm's design or performance. If they fail that duty, they lose their Section 230 protection and can be sued directly.
Think of this as the government stepping in and saying, "Your algorithm isn't just a suggestion box anymore; it has real-world consequences." The core of this bill is that new duty of care (SEC. 2. (1)(A)). It applies to the algorithms that rank, order, and amplify content based on your personal data—the stuff that keeps you scrolling. The goal is to prevent scenarios where the algorithm pushes a user toward self-harm, dangerous challenges, or content that incites violence against others, resulting in physical injury or death. For example, if a platform's algorithm is designed to prioritize extreme content that leads a user to attempt a dangerous, viral stunt, and that outcome was foreseeable, the platform could be on the hook.
Crucially, this duty of care does not apply if you are just looking at content sorted chronologically (newest first) or if you are viewing the initial results of a specific search you initiated. The focus remains squarely on the personalized recommendation engine that drives engagement by predicting what you want to see next. If a platform violates this new duty, it loses the liability shield of Section 230 (SEC. 2. (1)(D)). This is a massive shift, as it opens the door to direct lawsuits.
If someone suffers bodily injury or death due to a platform's failure to meet this algorithmic duty of care, they or their representatives get a "private right of action" (SEC. 2. (1)(E)). This means they can sue the platform in federal court for both compensatory and punitive damages. Even more significant for the platforms: the bill invalidates any pre-dispute arbitration agreement or joint-action waiver related to this new duty (SEC. 2. (1)(F)). For the average user, this means that if you are harmed by an algorithm, you won't be forced into private arbitration; you can take the company to court.
On one hand, this is a clear win for accountability: it forces platforms to prioritize user safety in algorithm design, potentially producing systems that are less likely to amplify dangerous or extreme content. On the other hand, the key standards are vague. What exactly constitutes "reasonable care" in algorithm design? What counts as "reasonably foreseeable" harm? These are subjective questions that will likely be settled only through years of expensive litigation. That uncertainty could push platforms to over-correct, restricting or down-ranking lawful but controversial content to reduce their legal risk, a phenomenon known as the "chilling effect." The bill does state that the duty cannot be enforced against a platform based on a viewpoint protected by the First Amendment (SEC. 2. (1)(C)), but in practice, platforms may still play it safe and censor more just to avoid a lawsuit.