PolicyBrief
S. 3495
119th Congress · December 16, 2025
Artificial Intelligence Scam Prevention Act
IN COMMITTEE

This Act prohibits the use of artificial intelligence to impersonate individuals for fraudulent purposes in commerce and strengthens enforcement against AI-enabled scams across telemarketing and text messaging.

Sponsor: Sen. Amy Klobuchar (D-MN)

New AI Scam Act Bans Deepfake Voice Impersonation and Requires Disclosure in Telemarketing

The Artificial Intelligence Scam Prevention Act is a direct response to the rising tide of AI-enabled fraud, which Congress notes accounted for over $1.4 billion in losses in 2024 alone. Simply put, this bill makes it illegal to use AI to replicate someone’s image or voice with the intent to defraud, and it significantly broadens the legal reach of federal agencies to police scams delivered via text message and video conference.

Your Voice is Now Protected

Section 3 gets right to the point: it’s now a deceptive act in commerce to use AI to replicate an individual’s image or voice with the intent to defraud. This covers the chilling “deepfake” scams where criminals clone a voice—say, pretending to be your boss, a family member, or a government official—to trick you into sending money. The Federal Trade Commission (FTC) is tasked with enforcing this, using the same powerful tools it uses against other unfair and deceptive practices. The provision explicitly covers impersonating a government or business official, closing a gap that scammers have exploited as voice-cloning tools have become widely available.

The Text Message and Video Conference Loophole Closes

If you’ve ever wondered why the laws governing robocalls didn't seem to cover the endless stream of spam texts you get, Section 4 is the answer. This section updates two major pieces of consumer protection law—the Telemarketing and Consumer Fraud and Abuse Prevention Act and the Communications Act of 1934 (the robocall law)—to explicitly include text messages and video conference calls within their regulatory scope. For you, this means the rules governing annoying, illegal calls now apply equally to your inbox, whether it’s SMS, MMS, or even app-to-person messaging.

The AI Disclosure Rule: No More Robot Secrets

One of the biggest real-world changes is the new disclosure requirement in Section 4. If a telemarketer makes a call or sends a text message using artificial intelligence to emulate a human being, they must promptly and clearly disclose that AI is being used. Think of it like this: if you answer a call and hear a voice that sounds human, but it’s actually a sophisticated chatbot, they have to tell you immediately. The bill also expands the definition of “artificial or prerecorded voice” to include any machine-generated speech that appears to authentically depict a person who didn't actually say those words, making it clear that deepfake voices are covered under existing robocall rules.

The Challenge of ‘Substantial Assistance’

While the bill is focused on catching the bad guys, there’s a provision that could affect technology platforms and service providers. Section 3 also prohibits providing “substantial assistance” to someone committing these AI fraud acts, if the provider “knows or should reasonably know” about the violation. This is where things get a bit vague. For example, if a large telecom company or an AI voice cloning service sees suspicious activity, how much responsibility do they have to stop it? The term “reasonably know” is a legal standard that will likely be tested, potentially making it harder for platforms to claim ignorance when their tools are used for fraud.

Better Reporting and Coordination

To ensure enforcement keeps up with technology, Section 5 establishes an Artificial Intelligence Scams Advisory Group composed of leaders from the FTC, FCC, Treasury, and Justice Department, alongside industry and consumer advocates. This group will work for five years to identify gaps in current scam prevention training and create model materials for businesses and financial institutions. The FTC is also required to update its public web portal immediately to include the latest information on AI-enabled scams, searchable by region, and to specifically target public awareness materials toward seniors, a population often hit hardest by these sophisticated scams.