PolicyBrief
H.R. 7110
119th Congress · January 15, 2026
Eliminating Bias in Algorithmic Systems Act of 2026
IN COMMITTEE

This Act requires federal agencies to establish civil rights offices and to report biennially on their efforts to mitigate bias and discrimination in covered algorithmic systems that affect people on the basis of protected characteristics.

Sponsor: Rep. Summer Lee (D-PA-12)

New Bill Would Require Federal Agencies to Tackle AI Bias, Mandate Civil Rights Offices by 2027

Alright, let's talk about something that's becoming a bigger part of our lives than we might realize: the algorithms that federal agencies use. Think about it: whether it's for getting a loan, accessing a government program, or even how you're assessed for certain benefits, these automated systems are often at play. The Eliminating Bias in Algorithmic Systems Act of 2026 is stepping in to make sure these systems play fair.

This bill basically tells federal agencies they need to get serious about algorithmic bias. It defines a "covered algorithm" as any computational process, like AI or machine learning, that could significantly affect things like your access to agency programs, economic opportunities, or even your protected rights. And when it talks about "protected characteristics," it means all the usual suspects: race, gender, age, disability, income level, and a bunch more, including newer categories like biometric information.

Setting Up the Watchdogs

One of the biggest moves this bill makes is requiring every federal agency that uses or oversees these algorithms to stand up a Civil Rights Office. And these aren't meant to be paper-pushing operations; the offices have to be staffed with actual experts and technologists who understand how bias, discrimination, and other harms can creep into AI systems. Their whole job is to focus on these issues, making sure that algorithms don't unfairly target or disadvantage people based on those protected characteristics. It's like putting a dedicated quality control team on the code that affects your daily life.

Reporting Back to HQ

These new civil rights offices won't just be working in the shadows. The bill mandates that they submit detailed reports to Congress every two years, with the first report due within one year of the law passing. These reports need to spill the beans on the current state of algorithmic tech within their agency's turf, the risks related to bias and discrimination, and what steps the agency has actually taken to fix things. They also have to detail how they've engaged with everyone from industry reps and civil rights advocates to academic experts and the folks directly affected by these algorithms. Plus, if they see something that needs a legislative fix, they're supposed to recommend it. That means more transparency and a clearer path to accountability, which is a win for anyone who's ever felt like a decision was made by a black box.

Teamwork Makes the Dream Work (or at least, less biased algorithms)

To keep everyone on the same page, the Assistant Attorney General for the Civil Rights Division at the Department of Justice will set up an interagency working group on algorithms and civil rights, and every new agency civil rights office will be part of it. This is a smart move because it keeps agencies from reinventing the wheel or missing common issues; it creates a unified front against algorithmic bias across the federal government, fostering a shared understanding and potentially better solutions than each agency working in isolation. The bill also authorizes the appropriation of the funds necessary to make all of this happen, which is crucial because good intentions without resources often go nowhere.

What This Means for You

In plain English, this bill is trying to put some guardrails on the increasingly complex world of government algorithms. If you've ever applied for a federal program, sought economic assistance, or even just interacted with a government website that uses some smart tech, this bill aims to ensure those systems are fair and don't inadvertently discriminate against you based on who you are. It's about bringing fairness and civil rights into the digital age of government services, ensuring that the tech designed to help us doesn't end up hurting us instead. It's a foundational step toward a more equitable digital future.