S. 1792, 119th Congress (May 15, 2025)
AI Whistleblower Protection Act
Status: In Committee

This Act establishes protections against retaliation for employees who report security vulnerabilities or violations related to Artificial Intelligence systems.

Sponsor: Sen. Charles "Chuck" Grassley (R-IA)


New AI Whistleblower Act Protects Employees Reporting Tech Flaws, Voids Mandatory Arbitration

The AI Whistleblower Protection Act is designed to shield employees and contractors who raise concerns about security flaws or potential violations in Artificial Intelligence (AI) systems used by their employers. This bill is a big deal for anyone working near advanced tech, as it creates a legal safety net if you spot something dangerous and decide to speak up. It essentially says that if you report an AI issue—whether it’s a security vulnerability or a violation that creates a serious risk to public safety—your employer cannot retaliate against you.

The bill gets specific about what counts as a problem. An AI security vulnerability is defined as any flaw that could allow someone to steal or illegally access advanced AI technology. An AI violation is broader; it covers breaking federal laws related to AI or, crucially, failing to act quickly when an AI system poses a "serious, specific risk to public safety, health, or national security." The definition of AI itself is wide-ranging, covering everything from machine learning algorithms to systems that mimic human thought, though it carves out AI embedded in common software products, such as your word processor.

Your New Safety Net: What You Can Report

If you're a developer, engineer, data scientist, or even an independent contractor—the bill calls you a "Covered Individual"—you’re protected if you report an AI problem. This protection applies whether you report internally to a supervisor, or externally to the Attorney General, a regulatory agency, or even a member of Congress. Think of it as a clear path to report potential dangers without fear of getting fired. For example, if you’re a software engineer and notice that your company’s new AI-driven medical diagnostic tool has a critical flaw that could misdiagnose patients, reporting that flaw is now legally protected.

The Enforcement Power: Double Back Pay and No Waivers

If your employer does retaliate, say by firing or demoting you after you make a protected report, you have a clear enforcement path. You start by filing a complaint with the Secretary of Labor; if the Department of Labor doesn't issue a final decision within 180 days (and you haven't caused the delay), you can take your employer straight to federal court and request a jury trial. The remedies are substantial and designed to deter retaliation. If you win, you are entitled to reinstatement with the same seniority, twice the amount of back pay you lost, plus interest and your legal fees. That double back pay provision is a strong signal that the law takes whistleblower protection seriously.

Perhaps the most impactful protection for the modern worker is found in the section that voids certain contracts. The bill explicitly states that employers cannot make you sign away these rights. This means any contract, agreement, or policy—including the increasingly common mandatory arbitration agreements—that attempts to stop you from seeking relief under this section is unenforceable. For many in the tech industry, this is a significant win, as it restores the right to sue in court rather than being forced into private arbitration.

The Fine Print: Where Things Get Fuzzy

While this bill offers strong protections, a couple of areas could cause headaches during implementation. The definition of an “AI violation” hinges on terms like “serious, specific risk to public safety,” and those terms are subjective. What one person sees as a serious, imminent risk, a company’s legal team might argue is a minor, manageable issue. That vagueness could open the door to legal fights over when a violation actually occurred, and it might also invite less-than-critical reports, raising litigation costs for companies already investing heavily in AI development.