This Act prohibits U.S. federal agencies from acquiring or using artificial intelligence tools developed by designated foreign adversaries.
John Moolenaar
Representative
MI-2
The No Adversarial AI Act establishes a process for identifying and publicly listing Artificial Intelligence (AI) tools developed by foreign adversaries. This legislation then prohibits U.S. executive agencies from acquiring or using these listed AI systems. Exceptions to this ban are narrowly defined, requiring high-level written approval if the AI is deemed absolutely necessary for specific national security or critical mission functions.
The aptly named No Adversarial AI Act is the federal government’s new plan to purge perceived security risks from its technology stack. Simply put, this bill establishes a process to identify and blacklist Artificial Intelligence (AI) tools developed by designated “foreign adversaries,” and then forces every federal agency to stop using them, fast. It’s a major supply chain security move that creates a public list of prohibited tech, putting the Federal Acquisition Security Council (FASC) and the Office of Management and Budget (OMB) firmly in the driver’s seat of federal AI procurement.
The bill kicks off with a mandate to create a public blacklist. The FASC has just 60 days to compile an initial list of AI systems developed by a “foreign adversary.” Within 180 days of the law’s enactment, the OMB Director must post that list online for everyone to see (Sec. 2). This list isn’t static; the FASC is required to review and update the list of prohibited systems at least every 180 days. For companies that find their AI on the list, there’s a path to appeal, but it’s a high bar: they must provide a sworn statement and proof that their product isn’t linked to an adversary, and the FASC must certify its removal.
Once an AI system lands on that blacklist, federal agencies have 90 days to take action. The head of every executive agency must review their current AI tools and plan to exclude or remove any system provided by an entity linked to a foreign adversary (Sec. 3). This is a hard deadline that could cause serious headaches for agencies relying on that tech right now. For example, if a blacklisted AI system is currently used by the Department of Energy for modeling climate data, the agency has three months to find, test, and implement a compliant replacement, or secure an exception.
Exceptions to the ban are narrow and require high-level sign-off. An agency head can only keep using a restricted AI if they determine it is absolutely necessary for one of four specific reasons, and they must notify the OMB Director and relevant Congressional committees in writing (Sec. 3). The exceptions are limited to scientifically valid research, evaluation/testing, counterterrorism/counterintelligence, or if not using the AI would “seriously jeopardize” the agency’s mission-critical functions. This means if you’re a federal researcher using a cutting-edge foreign AI for a non-critical project, you’re likely out of luck.
One of the most complex parts of this bill is how it defines a “Foreign adversary entity.” This isn’t just about companies headquartered in an adversary nation. It also includes any entity where a foreign person or group from that country owns at least a 20 percent stake, directly or indirectly (Sec. 3). This broad definition could easily ensnare international companies that are otherwise friendly to the U.S. but have minority investment from a blacklisted nation. It creates a massive compliance headache for global tech firms that sell to the U.S. government, requiring them to constantly audit their ownership structures to ensure they don’t accidentally exceed that 20 percent threshold and get shut out of federal contracts.