Policy Brief
H.R. 2152
119th Congress · March 14, 2025
AI PLAN Act
IN COMMITTEE

The AI PLAN Act mandates a collaborative strategy among the Treasury, Homeland Security, and Commerce departments to combat financial crimes involving artificial intelligence, including fraud and misinformation, by requiring annual reports to Congress on available resources, needed resources, and legislative proposals.

Zachary (Zach) Nunn

Representative (R, IA-3)

LEGISLATION

New Bill Orders Feds to Develop Annual Strategy Against AI Financial Crime, Including Deepfakes and Fraud

The proposed Artificial Intelligence Practices, Logistics, Actions, and Necessities Act, or AI PLAN Act, directs key federal agencies (specifically the Treasury, Homeland Security, and Commerce departments) to team up and deliver an annual game plan to Congress. This strategy, due within 180 days of enactment and updated yearly, focuses squarely on defending the U.S. against financial crimes powered by artificial intelligence, aiming to shield markets, individuals, businesses, and supply chains from AI-driven fraud and misinformation.

The Annual AI Defense Blueprint

Think of this as a yearly check-up mandated by Congress. The core of the AI PLAN Act is the requirement for this joint annual report. According to Section 2, this report must lay out the government's playbook, detailing interagency policies for protecting financial systems and the public. It needs to inventory the current tools (hardware, software, tech) ready for deployment against AI threats and, crucially, list the needed resources – identifying gaps in technology, personnel, and funding required to effectively combat these evolving crimes. This provides a regular assessment of whether federal agencies have what they need to tackle sophisticated AI scams.

Spotting the AI Threats: From Deepfakes to Digital Fraud

The bill explicitly calls out the kinds of advanced threats the strategy needs to address. We're talking about more than just basic phishing emails. The legislation highlights risks like 'deepfakes' (hyper-realistic fake videos or audio), 'voice cloning' (AI mimicking someone's voice, maybe for scam calls), the creation of 'synthetic identities' for fraud, and even AI used for foreign election interference or generating false market signals. Essentially, the plan needs to anticipate and counter AI being used to trick people, steal money, or destabilize financial systems in increasingly convincing ways.

Turning Plans into Protection: Recommendations for Laws and Best Practices

This isn't just about writing reports. Within 90 days after submitting each annual strategy, the same agencies must send Congress concrete recommendations. Section 2 mandates these include specific legislative proposals – ideas for new laws or updates to existing ones – to tackle the identified AI risks. They also need to develop and share best practices for American businesses and government bodies to protect themselves and respond if they are targeted by AI-driven financial crime. This could translate into practical guidance for companies on spotting AI-generated fake invoices or new regulations designed to curb the misuse of AI in financial markets.