The AI Fraud Accountability Act establishes federal criminal and civil penalties for using AI-generated digital impersonations to commit fraud, while creating a collaborative framework to develop detection and prevention standards.
Vern Buchanan
Representative
FL-16
The AI Fraud Accountability Act establishes federal criminal penalties and FTC enforcement authority against the use of AI-generated digital impersonations to commit fraud. The bill also mandates the creation of a multi-sector working group to develop best practices for detecting and preventing such fraud. Additionally, it promotes international cooperation to combat cross-border digital impersonation schemes while explicitly protecting First Amendment rights, including parody and satire.
The AI Fraud Accountability Act creates a new federal crime specifically for using 'digital impersonations'—think AI-generated voices or deepfake videos—to scam people out of money or sensitive info. Under Section 2, if someone uses software to create a likeness of a real person (or even a fake person that looks real) to defraud you, they face up to 3 years in prison and forfeiture of any property or proceeds gained through the scheme. It’s a direct response to those terrifying calls where a scammer uses a cloned version of a relative's voice to beg for emergency cash, or high-tech 'boss scams' where a fake video of a CEO orders an urgent wire transfer.
While the criminal side handles the bad actors, Section 3 gives the Federal Trade Commission (FTC) the teeth to treat these AI scams as 'unfair or deceptive acts.' This means the FTC can go after companies or entities involved in these practices using its full range of civil penalties. For the average person, this adds a layer of consumer protection that didn't exist in the pre-AI era. If you're a small business owner who gets hit by a sophisticated digital impersonation, there’s now a specific legal framework for the government to step in and investigate the fraud as a regulatory violation, not just a matter for local police.
Because AI moves faster than most laws, Section 4 of the bill orders the National Institute of Standards and Technology (NIST) to pull together a 'Working Group' within 30 days. This isn't just a group of bureaucrats; it includes experts from social media platforms, banks, and cybersecurity firms. Their job is to figure out the best ways to actually catch these fakes. For you, this might eventually mean better 'deepfake filters' on your phone or more secure verification steps at your bank. They are required to publish their first set of best practices within a year and update them annually for a decade to keep up with evolving tech.
Scammers often hide overseas, so Section 5 requires the FTC and DOJ to identify the top 10 countries where these AI frauds originate and negotiate international law enforcement agreements to hunt them down. However, the bill isn’t looking to kill your favorite meme or parody account. Section 6 includes a 'Savings Clause' that explicitly protects First Amendment rights. This means satire, journalism, and parody are safe from these new fraud rules. The bill draws a sharp line: if you're making a funny parody video of a celebrity, you're fine; if you're using that same tech to trick a grandmother into giving up her Social Security number, the feds are coming for you.