Sponsor: Sen. Ron Wyden (D-OR)
The Algorithmic Accountability Act of 2025 requires large entities using Automated Decision Systems (ADS) for "critical decisions," such as those affecting employment, housing, or finance, to conduct rigorous impact assessments. The Federal Trade Commission (FTC) is tasked with writing detailed regulations and establishing a public repository of summary reports on these systems. The legislation aims to ensure transparency, mandate bias testing, and require remediation when automated decision-making causes significant negative impacts.
The Algorithmic Accountability Act of 2025 is a major new effort to pull back the curtain on how AI and automated systems are making life-altering decisions about us—think jobs, housing, and healthcare. At its core, the bill requires large companies—those with over $50 million in annual receipts or handling over one million consumer identities—to conduct detailed Impact Assessments on any automated system used for a "Critical Decision." This means if a system is used to decide whether you get a loan, a job interview, or access to essential services like electricity, the company has to test it for bias and document the results.
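To pin down who is on the hook, here is a minimal sketch, in Python, of that coverage test as the summary above states it (an "or" between the two thresholds). The function name and the counting rules are assumptions for illustration; the actual statutory definitions are more detailed.

```python
# Illustrative only: a coverage check using the thresholds quoted above.
# The real definition of "covered entity" (how receipts are averaged, what
# counts as a consumer identity) is more nuanced than this sketch.
def is_covered_entity(annual_receipts_usd: float, consumer_identities: int) -> bool:
    """True if either threshold in the summary above is exceeded."""
    return annual_receipts_usd > 50_000_000 or consumer_identities > 1_000_000

# Example: a firm with $60M in receipts is covered even with few identities.
print(is_covered_entity(60_000_000, 10_000))   # True
print(is_covered_entity(20_000_000, 500_000))  # False
```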
What counts as a "Critical Decision"? The bill defines this broadly, covering anything that significantly affects a person's life regarding employment, education, housing, essential services, healthcare, and financial services (Sec. 2). This is the big deal: it moves the conversation from abstract AI ethics to concrete, everyday impacts. For example, if a company uses AI to screen job applications, it must now document how it tested that system to ensure the system doesn't unfairly screen out candidates based on race, sex, or disability, and it must keep those records for three years after the system is retired (Sec. 3).
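To make that concrete, here is a minimal sketch of the kind of bias check such documentation might describe. It applies the EEOC's "four-fifths rule" to selection rates by group; the bill itself does not prescribe a specific metric, so the metric, group labels, and numbers here are purely illustrative.

```python
# Illustrative only: the Act does not mandate a specific fairness metric.
# The EEOC "four-fifths rule" treats a group's selection rate below 80% of
# the highest group's rate as evidence of adverse impact. Group names and
# counts here are hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate. outcomes[group] = (selected, total)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best) < 0.8 for g, r in rates.items()}

# Hypothetical screening results: (candidates advanced, candidates screened).
results = {"group_a": (48, 200), "group_b": (30, 200)}
print(selection_rates(results))   # {'group_a': 0.24, 'group_b': 0.15}
print(four_fifths_check(results)) # group_b flagged: 0.15 / 0.24 = 0.625 < 0.8
```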
For the companies that meet the financial or data-handling thresholds (the “Covered Entities”), this bill is a massive compliance lift. They must perform impact assessments before deploying a system and continue to monitor it afterward (Sec. 3). These assessments aren't light reading; they must include detailed testing for performance differences across protected groups, documentation of stakeholder consultations, and a clear plan to fix any negative impacts found (Sec. 4). If a system is found to be causing significant harm, the company has to try to eliminate or reduce it quickly, or publicly justify why they didn't, citing a "compelling interest" (Sec. 4).
This means a major bank using an automated loan approval system can't just deploy it and hope for the best. It must document the data used, show how it tested for racial or gender bias in denial rates, and keep a log of every time a consumer challenges a decision. This compliance burden is significant, raising operational costs for these large entities, and those costs could eventually be passed on to consumers.
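For a sense of what that documentation trail might look like in code, here is a minimal sketch of an in-memory audit record. The schema, field names, and class names are assumptions for illustration; only the three-year retention after retirement comes from the summary of Sec. 3.

```python
# Hypothetical schema for the documentation trail described above.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ConsumerChallenge:
    filed_on: date
    decision_id: str
    outcome: str = "pending"  # e.g., "pending", "upheld", "reversed"

@dataclass
class SystemAuditRecord:
    system_name: str                  # e.g., "loan_approval_v3"
    critical_decision: str            # e.g., "consumer credit approval"
    training_data_sources: list[str]
    bias_test_summary: str            # e.g., denial-rate gaps across groups
    challenges: list[ConsumerChallenge] = field(default_factory=list)
    retired_on: date | None = None

    def retention_deadline(self) -> date | None:
        """Records are kept three years after retirement (Sec. 3; approx., ignoring leap days)."""
        if self.retired_on is None:
            return None  # system still live: keep everything
        return self.retired_on + timedelta(days=3 * 365)
```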
The bill dramatically increases the Federal Trade Commission’s (FTC) power and technical capacity to enforce these rules. It creates a brand-new Bureau of Technology within the FTC, led by a Chief Technologist, and authorizes the hiring of at least 50 new tech experts, bypassing standard civil service hiring rules to get top talent quickly (Sec. 8). This is the government recognizing that you can't regulate AI with staff who only understand paper files. This new tech bureau will be responsible for enforcing the law and providing technical assistance to other agencies.
One of the most consumer-facing parts of the bill is the requirement for the FTC to create a publicly accessible online repository (Sec. 6). While companies only submit a summary of their full assessment, the FTC must publish key details, including the name of the company, the critical decision the system is making (like "housing rental eligibility"), the data sources used, and, crucially, a link to the webpage where consumers can challenge or appeal the decision. This means if you get rejected for an apartment by an algorithm, you will have a clear, centralized place to find out who made the decision and how to fight it.
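For a sense of what a repository entry could contain, here is a hypothetical example built from the fields the summary lists (company name, the critical decision, data sources, and an appeal link). The FTC defines the real schema, so every key and value below is an assumption.

```python
# Hypothetical shape of a public repository entry, based on the fields named
# above. The FTC's actual schema may differ; all values are illustrative.
repository_entry = {
    "covered_entity": "Acme Property Management",
    "critical_decision": "housing rental eligibility",
    "data_sources": ["credit reports", "eviction records", "income verification"],
    "deployment_status": "active",
    "appeal_url": "https://example.com/appeals",  # where consumers challenge decisions
}
```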
However, the FTC must balance this transparency with the need to protect sensitive business information (Sec. 6). Companies are only required to share a summary report, not the full, detailed impact assessment, which limits the scope of public oversight. Ultimately, this legislation aims to make the complex, often invisible world of algorithmic decision-making more transparent and fair, giving consumers a clear path to appeal decisions that could drastically affect their lives.