PolicyBrief
S. 2938
119th Congress · September 29, 2025
Artificial Intelligence Risk Evaluation Act of 2025
IN COMMITTEE

This Act establishes a mandatory federal program to rigorously test advanced Artificial Intelligence systems for risks, providing Congress with data-backed insights to inform future regulation.

Sponsor: Sen. Joshua “Josh” Hawley (R-MO)


AI Risk Act Mandates Code Sharing, Imposes $1M/Day Fines on Advanced AI Developers

The Artificial Intelligence Risk Evaluation Act of 2025 would set up a mandatory, seven-year testing program within the Department of Energy (DOE) for the most powerful AI systems being built today. If you’re developing an “advanced artificial intelligence system” (meaning one trained using massive computing power, specifically more than $10^{26}$ computational operations), you have to participate, or you can’t deploy your product commercially.
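
For a rough sense of where that threshold sits, here is a minimal back-of-envelope sketch in Python. It assumes the widely cited "6 × parameters × training tokens" approximation for total training compute; the bill itself simply counts operations, and the model sizes below are hypothetical, chosen only for illustration.

```python
# Back-of-envelope check against the bill's 10^26-operation threshold.
# Assumes the common FLOPs ~= 6 * N * D rule of thumb for training compute,
# where N = parameter count and D = training tokens. The bill counts raw
# operations; this approximation and the example model are illustrative only.

THRESHOLD_OPS = 1e26  # trigger for an "advanced artificial intelligence system"

def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Estimate total training operations via the 6*N*D rule of thumb."""
    return 6 * parameters * tokens

# Hypothetical model: 1 trillion parameters trained on 20 trillion tokens.
ops = estimated_training_ops(1e12, 20e12)
print(f"Estimated training operations: {ops:.2e}")      # 1.20e+26
print("Covered by the Act:", ops > THRESHOLD_OPS)       # True
```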

The Price of Admission: Mandatory Code Disclosure

This isn’t just a friendly check-in. Section 4 requires developers to hand over substantial proprietary information to the Secretary of Energy upon request: the system’s underlying source code, the training data used, and the model weights. These are the crown jewels of any AI company, and for the tech industry this is a huge deal. Imagine being forced to give the government the secret sauce to your multi-billion-dollar product just to get permission to sell it. The bill explicitly states that deployment, which includes releasing the software open-source, is forbidden until the developer complies with such a request.

Defining the Danger: Superintelligence and Scheming Behavior

Under Section 3, the bill defines the AI risks it’s trying to mitigate, introducing terms like “artificial superintelligence” (AI that can match or beat human performance across most cognitive tasks and can autonomously modify its own programming). It also defines “scheming behavior” as an AI actively trying to deceive its human operators or bypass oversight. The DOE program’s main job (Sec. 4) is to use expert “red teams” to test for these exact scenarios, looking for things like “loss-of-control scenarios” where the AI acts against human intent. The goal is to give Congress data on whether these systems pose an “existential threat” before they are released.

The Real-World Hammer: The $1 Million-a-Day Fine

If a developer refuses to participate, or deploys an advanced AI system without complying, the penalty is severe: not less than $1,000,000 for every single day of violation (Sec. 4). This isn’t a slap on the wrist; it’s an effective operational shutdown for almost any company. While the intent is clearly to ensure compliance and prioritize safety over speed, this penalty structure creates an immense power imbalance. It’s hard to imagine a startup, or even a mid-sized firm, surviving a dispute with the DOE over compliance while facing a seven-figure daily fine.

The Long Game: Permanent Federal Oversight

Section 5 requires the Secretary of Energy to submit a detailed plan to Congress within 360 days of enactment for permanent federal oversight of advanced AI systems. This plan must recommend specific standards, certification processes, and licensing rules based on the testing data. Essentially, the seven-year evaluation program is the data-collection phase for establishing a permanent AI regulatory body. The plan must also evaluate whether the tested AI systems might damage economic competition or civil liberties, ensuring that the future regulatory framework looks beyond existential risks to cover broader societal impacts.