H.R. 5511
119th Congress · September 19, 2025
Algorithmic Accountability Act of 2025
IN COMMITTEE

This bill establishes FTC-led federal oversight, requiring large entities to conduct impact assessments of high-stakes "covered algorithms" and report on them to ensure fairness and mitigate consumer harm.

Sponsor: Rep. Yvette Clarke (D, NY-9)


New Algorithmic Accountability Act Forces Large Tech Firms to Test AI for Bias in Critical Decisions

The Algorithmic Accountability Act of 2025 is the federal government’s first serious attempt to put guardrails around the AI systems that increasingly run our lives. The core idea is simple: if a company uses a complex algorithm—a “covered algorithm”—to make a “critical decision” about you, they have to prove that system isn't secretly screwing you over.

What counts as a critical decision? Think about the stuff that actually matters: getting a job, securing a mortgage, finding housing, or accessing essential utilities and healthcare. The Federal Trade Commission (FTC) gets two years to write the detailed rules; once those rules are set, companies must conduct “Impact Assessments” before and during the deployment of these systems.

Who Has to Play by the New Rules?

This isn't aimed at the local pizza shop using an algorithm to optimize delivery routes. The law targets “Covered Entities”: big players who either pull in over $50 million in annual gross receipts or carry an equity value over $250 million, and who also handle identifying information for more than 1 million consumers, households, or devices. If you clear both the size and the data thresholds, you’re the one who has to read the fine print.
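
To make that two-part test concrete, here is a minimal sketch in Python, assuming exactly the thresholds paraphrased above. The bill’s actual definitions are longer and more precise, and every name below is invented for illustration, not statutory language.

```python
def is_covered_entity(annual_gross_receipts: float,
                      equity_value: float,
                      identifying_records: int) -> bool:
    """Two-part test sketched from the thresholds as described above:
    a size prong (receipts OR equity) plus a data-volume prong
    (consumers, households, or devices). Illustrative only."""
    meets_size_test = (annual_gross_receipts > 50_000_000
                       or equity_value > 250_000_000)
    meets_data_test = identifying_records > 1_000_000
    return meets_size_test and meets_data_test

# The local pizza shop: under every threshold, so out of scope.
print(is_covered_entity(2_000_000, 5_000_000, 40_000))        # False
# A mid-size lender with a large customer file: covered.
print(is_covered_entity(80_000_000, 100_000_000, 3_500_000))  # True
```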

For these large companies, the compliance cost is going to be significant. Section 4 mandates incredibly detailed documentation. They must compare the new algorithm's performance against the old way of doing things, keep meticulous records of stakeholder consultations, and continually test for privacy risks. Most importantly, they must check for bias and differential outcomes based on protected characteristics like race or age. If they find a significant negative impact, they have to try to fix it immediately. If they choose not to fix it, they must document a compelling, non-discriminatory reason why.
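
The bill leaves the actual testing methodology to the FTC’s rulemaking, so nothing specific is mandated yet. But to give a flavor of what checking for “differential outcomes” can look like in practice, here is a sketch of one widely used screen, the four-fifths (80%) rule borrowed from employment law. The data and function names below are invented for illustration.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, approved) tuples.
    Returns each group's approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def flags_disparate_impact(rates, threshold=0.8):
    """Flag any group whose rate falls below 80% of the best rate."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Made-up outcomes: group_a approved 60/100, group_b approved 35/100.
sample = ([("group_a", True)] * 60 + [("group_a", False)] * 40
          + [("group_b", True)] * 35 + [("group_b", False)] * 65)
rates = selection_rates(sample)       # {'group_a': 0.6, 'group_b': 0.35}
print(flags_disparate_impact(rates))  # {'group_a': False, 'group_b': True}
```

A flag like the one on group_b is exactly the kind of finding a Covered Entity would then have to mitigate, or else document a compelling, non-discriminatory reason for leaving it in place.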

The FTC Gets a Tech Upgrade

To enforce these complex rules, the FTC is getting a major overhaul under Section 8. The bill establishes a brand new Bureau of Technology within the FTC, led by a Chief Technologist. Crucially, the FTC Chair is authorized to hire at least 75 specialized staff—engineers, data scientists, and privacy experts—bypassing standard federal civil service hiring rules. This is a clear signal that the government recognizes it needs technical muscle to regulate modern tech, but sidestepping those hiring checks is a big move that concentrates significant power in the agency's hands.

A Window into the Black Box

While the full, proprietary Impact Assessments remain confidential (Section 4), the public will get a limited view. Covered Entities must submit annual “Summary Reports” to the FTC, detailing their testing methods, performance metrics, and any negative impacts found and mitigated. Section 6 requires the FTC to take this information and create a publicly accessible, searchable online database.

This repository won't reveal trade secrets, but it will list the company, what critical decision the algorithm makes (e.g., approving job applicants), the data sources used, and, most importantly for consumers, a link to instructions on how to appeal or correct a decision made by that specific algorithm. For researchers and consumer advocates, this is huge—it provides the first high-level, standardized look at how AI is being deployed across critical sectors.
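
To picture what a single entry could look like, here is a purely speculative sketch. The field names mirror the description above, but the real schema will come out of the FTC’s rulemaking, and every value shown is fictional.

```python
from dataclasses import dataclass

@dataclass
class RepositoryEntry:
    """Hypothetical row in the FTC's public database; the fields are
    drawn from the bill's description, the names are invented."""
    covered_entity: str           # the company deploying the algorithm
    critical_decision: str        # e.g., "approving job applicants"
    data_sources: list[str]       # categories of input data used
    appeal_instructions_url: str  # how to appeal or correct a decision

entry = RepositoryEntry(
    covered_entity="Example Lending Co.",  # fictional
    critical_decision="mortgage underwriting",
    data_sources=["credit history", "income verification"],
    appeal_instructions_url="https://example.com/appeals",
)
print(entry.appeal_instructions_url)
```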

What Does This Mean for You?

If you're applying for an apartment, a loan, or a new job, this Act is designed to protect you from being unfairly screened out by an algorithm that was never properly checked for bias. If the system denies you, the company will eventually have to provide a clear path for appeal, and the FTC will have the technical staff to investigate if that system is fundamentally unfair.

For companies, the message is clear: accountability is coming, and it's expensive. The bill sets a detailed federal standard, but Section 11 explicitly states that this law does not preempt state or local laws. This means if your state or city passes stricter AI accountability rules, those rules still apply. This lack of preemption ensures that the federal law acts as a floor, not a ceiling, for consumer protection in the age of algorithms.