Policy Brief
H.R. 7058
119th Congress | January 14, 2026
Foreign Adversary AI Risk Assessment and Diplomacy Act
IN COMMITTEE

This Act mandates a comprehensive risk assessment and the development of a diplomatic strategy to counter threats posed by foreign adversaries utilizing generative artificial intelligence.

Sponsor: Michael Baumgartner (R)

Representative, WA-5


New Bill Mandates State Dept. to Assess Foreign Adversary AI Risks and Develop Diplomatic Strategy

Alright, let's talk about something that sounds super techy and government-y but actually hits pretty close to home: artificial intelligence in the hands of countries that aren't exactly our best friends. The Foreign Adversary AI Risk Assessment and Diplomacy Act is basically telling the State Department, "Hey, we need you to figure out what kind of trouble foreign adversaries could cook up with AI, then come up with a plan to deal with it." Specifically, the Secretary of State, working with the Commerce Secretary and the Director of National Intelligence, has 180 days to deliver a comprehensive report to Congress. This report needs to break down how generative AI (tools that can churn out convincing fake images, videos, or text) could mess with our national security, our economy, and even our democratic process. After that, they've got a year to roll out a full-blown diplomatic strategy to tackle these risks.

Decoding the AI Threat

So, what exactly are they looking for in this assessment? The bill, in Section 1, asks for a deep dive into how these AI applications could be used for spreading disinformation, launching cyberattacks, or creating other security headaches. It's not just about what could happen, but also a reality check on where we stand against these foreign players in the AI race. Think about it: if a foreign power can generate hyper-realistic fake news stories that spread like wildfire, that's a problem for everyone trying to figure out what's real online. This part of the bill is all about getting a clear picture of the battlefield before we even think about fighting.

The Diplomatic Playbook

Once we know what we're up against, the next step, also laid out in Section 1, is to build a diplomatic strategy. This isn't about the U.S. going it alone. The State Department is tasked with engaging allies and partners to create shared rules and standards for managing these AI risks. Getting a bunch of different countries to agree on how to handle something as complex as AI is like getting everyone to agree on what to order for lunch, but on a global scale. The goal is to push back against the malicious use of AI through diplomacy and promote the kind of secure, trustworthy AI development that aligns with our values. It's about building a global team effort to keep AI from becoming a weapon.

Annual Check-ins and Public Transparency

This isn't a one-and-done deal. Section 3 of the bill requires the Secretary of State to submit annual assessments to congressional committees for the next three years. These assessments will look back at incidents from the previous year where foreign adversaries tried to use generative AI for malicious activities against the U.S. or our allies. This includes everything from spreading propaganda to enhancing military capabilities or cyber operations. The cool part? These assessments need to be unclassified and posted on a publicly available Department of State website. This means you, me, and anyone else who's interested can actually see what's going on, giving us a clearer understanding of the digital threats we face. It’s a step towards keeping everyone in the loop, which is pretty rare for national security stuff.