Policy Brief
H.R. 1736
119th Congress · Nov 19, 2025
Generative AI Terrorism Risk Assessment Act
HOUSE PASSED

This Act mandates the Department of Homeland Security to annually assess and report on terrorism threats posed by the use of generative artificial intelligence.

Sponsor: Rep. August Pfluger (R-TX-11)


Bill Mandates 5-Year DHS Review of AI Terrorism Risk: Unclassified Reports Coming Annually

This bill, the Generative AI Terrorism Risk Assessment Act, is straightforward: it puts the Department of Homeland Security (DHS) in charge of figuring out exactly how foreign terrorist organizations plan to use generative AI (think ChatGPT, Midjourney, or similar tools) to cause trouble. Specifically, the Secretary of Homeland Security must produce an annual assessment of this threat for five years, with the first report due one year after the bill is enacted. The goal is to get a handle on a national security risk that Congress believes is not yet fully understood (SEC. 2).

The New Threat Assessment: AI for Radicalization and WMDs

These annual reports aren't just general hand-waving. They have to zero in on two specific, high-stakes areas. First, they must analyze how foreign groups are using generative AI to spread violent messages, radicalize, and recruit people. This could mean AI-generated propaganda videos or personalized, highly effective deepfake messaging. Second, and perhaps more concerning, the reports must analyze attempts to use AI to “improve their ability to create or use chemical, biological, radiological, or nuclear weapons” (SEC. 3). If you’re in tech development, this is a clear signal that the federal government is watching how AI models could lower the barrier to entry for producing genuinely dangerous materials.

The Information-Sharing Mandate

To create these reports, the bill formalizes a major intelligence-sharing effort. DHS is required to work closely with the Office of the Director of National Intelligence (ODNI). On top of that, the FBI, the entire Intelligence Community, and state and local Fusion Centers—those local hubs where federal, state, and local law enforcement share information—must share everything they know about generative AI terrorism threats with the Secretary of Homeland Security (SEC. 3). This means a massive amount of data and analysis will be flowing into DHS, which will then incorporate it into its assessments and share it back out to the local centers.

For the average person, this sharing mandate is a double-edged sword. On one hand, better communication between federal agencies and local police about serious threats is a good thing for public safety. On the other hand, whenever you centralize information gathering across the entire security apparatus, there’s always a risk of mission creep. The bill does require DHS to coordinate the assessment to ensure compliance with laws protecting individual privacy, civil rights, and civil liberties, which is a necessary safeguard, but the sheer volume of data being shared could still raise concerns for civil liberty advocates.

What You Get to See

Here’s the transparency part: DHS must post the unclassified portion of the annual assessment on its public website. This means that while the deepest, most sensitive details will likely be hidden in a classified annex—the stuff that could compromise intelligence sources or methods—the public will get a yearly, official summary of how the government views the AI terrorism threat. This is a big win for transparency, giving researchers, tech companies, and the public a clear, non-speculative look at the national security challenges posed by generative AI. It also ensures that the issue stays front and center for Congress, as the Secretary is required to brief the relevant committees after each report is submitted (SEC. 3).