The AI Fraud Accountability Act of 2026 establishes federal criminal and civil penalties for using AI-generated digital impersonations to commit fraud while mandating the development of national best practices to detect and prevent such deception.
Sponsored by Senator Tim Sheehy (MT)
The AI Fraud Accountability Act of 2026 establishes federal criminal and civil penalties for using AI-generated digital impersonations to commit fraud. The bill mandates the creation of a NIST-led working group to develop best practices for detecting and preventing such fraud while fostering international cooperation to combat these crimes. It also includes a savings clause to protect First Amendment rights, such as parody, satire, and journalism.
The AI Fraud Accountability Act of 2026 is a direct response to the rise of 'deepfakes'—those eerily convincing AI-generated videos and voice clips that can make anyone appear to say or do something they never did. The bill makes it a federal crime to use digital impersonations to trick people out of money, documents, or anything else of value. Specifically, it targets any tech-altered depiction that is 'indistinguishable' from a real person to a reasonable observer. If someone uses a clone of your boss's voice to trick you into wiring company funds, or a fake video of a relative to scam you out of a 'bail payment,' they could face up to 3 years in prison and forfeit any property or equipment used to commit the crime.
Beyond just making these scams illegal, the bill puts some muscle behind the rules by handing enforcement power to the Federal Trade Commission (FTC). Under Section 3, using a digital impersonation to defraud someone is treated as an 'unfair or deceptive act,' allowing the FTC to go after scammers with the same tools it uses to fight telemarketers and predatory lenders. For a small business owner hit by a high-tech invoice scam, this means a specific federal agency is tasked with tracking the bad actors down. To keep up with how fast AI moves, the bill also orders the National Institute of Standards and Technology (NIST) to pull together a 'Working Group' of tech experts, social media platforms, and law enforcement. Their job is to build a playbook of best practices for spotting and stopping these fakes before they hit your inbox.
One of the biggest headaches with digital fraud is that the person pressing the buttons is often thousands of miles away. Section 5 of the bill tries to close that gap by requiring the FTC and the Department of Justice to identify, within 90 days, the top 10 foreign countries where these AI scams originate, and then to work on formal cooperation agreements with those governments to help extradite or prosecute scammers. This means if a criminal organization in a 'scam hub' country is targeting American retirees with AI-generated videos, the U.S. government now has a legal mandate to push that country's local police to help shut them down.
With any law involving 'fake' media, there is always a worry about where the line is drawn for comedy or news. The bill includes a specific 'Savings Clause' in Section 6 to protect the First Amendment. This ensures that the law only bites if there is an actual 'intent to defraud.' If you are a YouTuber making a parody video or a journalist using AI to illustrate a point, you aren't the target here. The focus is strictly on the 'bad actors'—criminal organizations and individual fraudsters who use digital masks to steal. By focusing on the intent to steal money or property rather than the technology itself, the bill attempts to protect your wallet without accidentally silencing your favorite satirist.