Policy Brief
S. 3680
119th Congress · January 15, 2026
Eliminating Bias in Algorithmic Systems Act of 2026
IN COMMITTEE

This bill establishes requirements for federal agencies to address bias and discrimination in automated decision-making systems by mandating civil rights offices and regular reporting on algorithmic harms.

Sponsor: Sen. Edward "Ed" Markey (D-MA)

LEGISLATION

Eliminating Bias in Algorithmic Systems Act: New Civil Rights Oversight for Federal AI Decisions Kicks Off in 2026

The federal government is increasingly using algorithms to decide everything from who gets a small business loan to how veterans access healthcare. The Eliminating Bias in Algorithmic Systems Act of 2026 aims to ensure those 'black box' decisions aren't baking in old-school discrimination. Under this bill, any federal agency that uses, funds, or even advises on complex algorithms—think machine learning or AI—must have a dedicated Office of Civil Rights staffed with tech experts. These aren't just IT roles; they are specialists tasked specifically with hunting down bias related to race, age, disability, and even income level within the software the government relies on.

The Tech Police in the Hallways of Power

This bill targets 'covered algorithms,' which is policy-speak for any automated process that materially affects your life—like the cost of a program or your eligibility for government-regulated rights (Section 2). For example, if an AI is used to screen applicants for a federal housing voucher, this law requires an expert to verify that the code isn't accidentally filtering people out based on their zip code or source of income. By requiring these offices to be established, the bill moves AI oversight from a vague 'best practice' to a mandatory job requirement for agencies like the Department of Labor or the VA.

Showing the Receipts Every Two Years

Transparency is the main lever here. Starting one year after enactment, and every two years after that, these civil rights offices must submit a public report to Congress (Section 3). They must detail the risks they’ve found, what they’ve done to fix them, and—importantly—who they’ve talked to. This means they can’t just huddle in a room; they are required to engage with 'stakeholders,' a group that spans industry leaders, academic experts, worker organizations, and the actual people affected by these systems. It’s an attempt to make sure the software used to manage public life is vetted by the public it serves.

Connecting the Dots and the Dollars

To keep agencies from working in silos, the Assistant Attorney General will lead a new interagency working group. This group acts like a central hub where civil rights experts from different departments can share notes on the latest AI risks and how to stop them. While the bill authorizes 'whatever sums are necessary' to fund these new offices, the actual impact will depend on whether future budgets match that ambition. Because the definition of a 'covered algorithm' is somewhat broad (Section 2), there is a chance that agencies could get bogged down in paperwork for minor systems, but the goal is clear: making sure that when a computer says 'no,' it’s not because of a biased line of code.