The "Generative AI Terrorism Risk Assessment Act" mandates annual assessments by the Department of Homeland Security on how terrorist groups might use generative AI to threaten the U.S., along with recommendations to counter these threats.
August Pfluger
Representative
TX-11
The Generative AI Terrorism Risk Assessment Act requires the Department of Homeland Security to produce annual assessments, for the next five years, on how terrorist groups might use generative AI to threaten the U.S., and to recommend ways to counter those threats. These assessments will analyze terrorist groups' use of generative AI to spread extremist messages, radicalize or recruit individuals, or enhance their ability to develop weapons. The assessments will be submitted in an unclassified format, with a classified annex if needed, and shared with relevant congressional committees, state and local fusion centers, and the National Network of Fusion Centers.
The Generative AI Terrorism Risk Assessment Act directs the Department of Homeland Security (DHS) to keep tabs on how terrorist groups are using, or might use, artificial intelligence tools like ChatGPT to threaten the U.S. For the next five years, DHS will produce yearly reports analyzing these threats and recommending ways to counter them.
This bill focuses on how terrorist organizations might exploit generative AI, the technology behind tools that can create realistic text, images, and even videos. Think deepfakes, automated propaganda, or even assistance with weapon development. The bill specifically mentions chemical, biological, radiological, and nuclear (CBRN) weapons in Section 3(b)(1)(C).
For example, imagine a terrorist group using AI to generate thousands of personalized, convincing recruitment messages targeting vulnerable individuals online. Or it could create fake news reports to spread disinformation and incite violence. The DHS will now be tasked with analyzing these kinds of scenarios.
The first report is due within 180 days of the bill's enactment (Section 3(a)). Each report will cover the past year's incidents and look ahead at potential threats. The bill, in Section 3(b)(2), also requires the DHS to develop recommendations to counter those threats. These could range from developing AI detection tools to working with tech companies to limit the misuse of their platforms.
The bill emphasizes collaboration. The DHS won't be going it alone. It will coordinate with other agencies within the department (Section 3(c)) and with state and local "fusion centers" (Section 3(f)), hubs where information about potential threats is shared and analyzed. It is all hands on deck.
These reports won't be top-secret. The bill mandates that unclassified versions be submitted to Congress and posted on the DHS website (Section 3(d)). If there's sensitive information that can't be made public, a classified annex will be included. Within 30 days of submitting each report, the Secretary of Homeland Security will brief the relevant congressional committees (Section 3(e)).