PolicyBrief
H.R. 2564
119th Congress | April 1, 2025
Protect Victims of Digital Exploitation and Manipulation Act of 2025
IN COMMITTEE

This bill establishes a federal crime for the non-consensual creation or distribution of digitally forged intimate images of identifiable individuals.

Nancy Mace (R), Representative, SC-1
LEGISLATION

Proposed Federal Law Targets AI Deepfakes: Creating Fake Intimate Images Could Carry Up to 5 Years in Prison

The Protect Victims of Digital Exploitation and Manipulation Act of 2025 takes aim at one of the nastiest problems modern technology has created: non-consensual intimate deepfakes. Simply put, the bill creates a brand-new federal crime for recklessly creating or sharing fake intimate pictures of identifiable people without their explicit permission. If you do, and the conduct crosses state lines (which almost everything online does), you could face fines, up to five years in federal prison, or both (Sec. 2).

The Digital Forgery Crackdown

This isn't about awkward selfies; it’s about synthetic media. The bill specifically targets a “digital forgery,” defined as an intimate image created or altered with software or AI so that it appears to be an authentic depiction of the person. An “intimate visual depiction” means images showing genitals, the pubic area, the anus, or a female nipple, or showing sexually explicit conduct. The key here is the “identifiable individual”: the person’s face, likeness, or other unique features make them recognizable (Sec. 2).

If you’re thinking about the consent angle, the bill sets a high bar. Consent must be “affirmative, conscious, competent, and voluntary.” This means silence or simply not objecting doesn't count. The law makes it clear that even if the victim is a public figure, they are still protected—if they didn't freely agree to the fake image being made or shared, it’s a crime. This is a huge win for privacy, as it finally gives law enforcement a clear tool to go after the creators of this harmful content.

Who This Affects and Why It Matters

For the vast majority of people, this bill acts as a protective shield. It gives victims, who are overwhelmingly women, a clear path to justice for abuse that is devastating but today often falls into a legal gray area. For instance, if a disgruntled ex-partner uses AI to generate a fake intimate image of their former spouse and posts it on a social platform, that person would now face federal criminal charges, not just a civil lawsuit.

However, the bill isn't a blanket prohibition. It includes specific, common-sense exceptions: these images can still be shared in good faith with law enforcement, in a legal proceeding, or for medical education or treatment. This ensures that legitimate uses aren’t accidentally criminalized (Sec. 2).

The Internet Provider Safety Net

In a move that should reassure tech companies and small platform owners, the bill generally protects “communications services” (meaning ISPs, social media platforms, etc.) from liability. They won't be held responsible for fake content posted by a user, unless the service provider itself recklessly distributes the violating content. This is crucial: it targets the bad actors who create and post the images, not the platforms that host billions of pieces of user content (Sec. 2).

Overall, this bill is a necessary update to federal law, bringing criminal statutes into the age of AI. It gives victims real recourse and establishes a clear, strong deterrent against those who would weaponize digital technology to exploit and manipulate others. It’s tightly written and focused on criminalizing the specific, harmful act of non-consensual deepfake creation and distribution.