PolicyBrief
H.R. 5315
119th Congress
September 11, 2025
FAIR Act
IN COMMITTEE

The FAIR Act prohibits federal agencies from procuring large language models unless they adhere to strict standards of truthfulness, accuracy, neutrality, and transparency regarding ideological bias.

Harriet Hageman (R-WY)
Representative

LEGISLATION

New 'FAIR Act' Restricts Federal AI Purchases: Models Must Be 'Nonpartisan' and Avoid Ideological Leaning

The proposed Fair Artificial Intelligence Realization Act of 2025 (FAIR Act) isn’t about regulating your iPhone; it’s about setting strict new ground rules for how the federal government buys its advanced computer brains—specifically, Large Language Models (LLMs), the technology behind tools like ChatGPT.

The Government’s New AI Shopping List

Section 2 of the FAIR Act puts a hard stop on federal agencies buying any LLM that doesn’t meet a very specific set of criteria. Think of this as the government’s new procurement checklist for AI. The core goal is to ensure that the AI tools used by agencies (say, for drafting reports, analyzing data, or answering public inquiries) are trustworthy and fact-based. The bill mandates that any purchased LLM be developed to be truthful, to prioritize accuracy (admitting when it doesn’t know something), and to stay neutral.

Where the Rubber Meets the Road: Ideology and Accuracy

The mandate for neutrality is the most significant part of this section. The bill explicitly states these tools must be nonpartisan and cannot “twist their answers to favor specific political or ideological beliefs.” It even calls out specific concepts like “diversity, equity, and inclusion” (DEI) as examples of beliefs the AI cannot favor. Furthermore, the developers themselves are barred from “secretly bak[ing] in their own political or ideological leanings into the output” unless the user specifically prompts for that perspective. For federal agencies, this means they can’t buy an AI model if it appears to subtly push a specific social or political agenda in its standard responses.

The Implementation Headache

While accurate, non-biased government AI sounds great in theory, enforcing these rules in practice is complex. The bill creates a high hurdle for AI developers who want to sell to the government. How exactly do you prove an LLM is truly “nonpartisan”? Terms like “truthful” and “historical truth” are highly subjective, as is what counts as an “ideological leaning.” For example, if an agency evaluating an LLM finds that the model’s output aligns with a specific interpretation of a historical event, or that it uses terminology favored by one political side, could that model be rejected under this law? This vagueness could cause significant delays for agencies trying to adopt cutting-edge AI and could spark disputes over which viewpoints are permissible.

Who Feels the Pinch?

This act would directly impact the tech companies that develop LLMs. If a developer’s model is trained on data sets or methodologies that an agency deems to carry an “ideological leaning,” even unintentionally, that developer could lose out on lucrative government contracts. This could force developers to sanitize or fundamentally change their models specifically for the federal market. For federal agencies, procuring new, useful AI would become slower and riskier, since they would have to navigate these subjective ideological checks before signing a contract. Ultimately, the success of the FAIR Act hinges entirely on how federal regulators define and measure “nonpartisan truth” in a piece of code.