Policy Brief
H.R. 8094 — 119th Congress
March 26, 2026
AI Foundation Model Transparency Act of 2026
Status: In Committee

This bill mandates that providers of powerful AI foundation models disclose detailed information regarding their training data, safety protocols, and performance benchmarks to the Federal Trade Commission for public transparency.

Sponsor: Donald Beyer (D), Representative, VA-8

AI Foundation Model Transparency Act Mandates Public Safety Disclosures for Tech Giants by 2027

The AI Foundation Model Transparency Act of 2026 is essentially a 'nutrition label' law for the most powerful artificial intelligence systems. It targets 'foundation models' (the heavy-duty AI engines like those behind ChatGPT) that either serve 10 million monthly users or were built using massive amounts of computing power (specifically, more than 10^26 operations). Within a year of this bill becoming law, the Federal Trade Commission (FTC) must roll out rules requiring these tech companies to pull back the curtain on how their AI is built, what data it's eating, and where it might go off the rails. The goal is to move away from 'black box' technology toward a system where the public and regulators actually know what's under the hood before these tools are used for high-stakes decisions like hiring or medical advice.
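As a rough illustration of the coverage rule above (not statutory text), the two-pronged test can be sketched as a simple either/or check. The threshold numbers come from this summary; the function and variable names are invented for illustration, and whether the boundaries are inclusive is an assumption:

```python
# Hypothetical sketch of the bill's coverage thresholds as described above.
# The numbers (10 million monthly users, 10**26 training operations) come
# from this summary; all names here are illustrative, not from the bill text.

COVERED_USER_THRESHOLD = 10_000_000   # monthly users
COVERED_COMPUTE_THRESHOLD = 10**26    # operations used in training

def is_covered_model(monthly_users: int, training_operations: int) -> bool:
    """A model is covered if it crosses EITHER threshold (assumed inclusive
    for users, strictly greater for compute, per the summary's wording)."""
    return (monthly_users >= COVERED_USER_THRESHOLD
            or training_operations > COVERED_COMPUTE_THRESHOLD)

# A model with 12 million monthly users is covered regardless of compute;
# a small research model under both thresholds is not.
print(is_covered_model(12_000_000, 10**20))
print(is_covered_model(1_000, 10**20))
```

The either/or structure matters: a widely used model built on modest hardware is covered just as surely as a compute-heavy model with few users.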

Opening the Black Box

Under Section 2, companies can't just release a massive AI model and hope for the best. They will have to submit detailed reports to the FTC and post consumer-friendly summaries on their own websites. This includes a breakdown of their training data—basically the digital library the AI studied—and how they keep that data secure. For example, if an AI is being used by a bank to decide who gets a home loan, this law requires the developer to disclose how the model was tested to avoid bias and what precautions are in place to prevent it from spitting out harmful or inaccurate financial info. It’s about making sure that if an AI makes a mistake that affects your life, there’s a paper trail showing how that AI was vetted in the first place.

Guardrails for High-Stakes Tech

The bill specifically flags 'high-risk' areas where AI mistakes could be disastrous. Companies must disclose how their models perform on benchmarks related to national security, cybersecurity, elections, and healthcare. If you’re a patient using an AI-powered health app, or a voter worried about deepfakes during an election, these provisions are designed to ensure the underlying tech has been stress-tested against spreading misinformation or leaking sensitive data. Section 2 also requires companies to disclose their 'training data cutoff date'—the point at which the AI stopped learning new facts—so users know if they're getting advice based on yesterday’s news or a decade-old manual.

Protecting the Little Guys and the Open Web

While the bill puts the squeeze on big tech, it includes a 'hall pass' for the open-source community. Fully open-source models are exempt, which helps developers who share their code freely to keep innovating without getting buried in paperwork. For the smaller startups—those in business for less than a year or meeting small business size standards—the FTC is required to provide a 'technically proficient representative' to help them navigate the rules. These startups also get a three-month grace period before any penalties kick in, ensuring a single filing error doesn't tank a new business before it gets off the ground.

Enforcement and the 'Secret' Clause

To keep companies honest, the FTC will treat violations as 'unfair or deceptive acts,' which carries the weight of federal fines. However, there is a notable caveat: companies are allowed to redact (black out) certain information if they can justify it for national security or trade secret reasons. While this protects sensitive code from hackers, it’s the part of the bill that bears watching—if the 'justification' for hiding info is too broad, the transparency the bill promises might end up with a lot of holes in it. The FTC will have to play referee, reviewing these redactions annually to make sure companies aren't just using 'security' as an excuse to hide flaws.