PolicyBrief
S. 1396 | 119th Congress | April 9, 2025
Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025
Status: In Committee

This Act establishes standards for content provenance information and detection methods to protect the integrity of digital media against unauthorized synthetic and deepfaked content.

Sponsor: Sen. Maria Cantwell (D-WA)


Proposed Law Demands Digital Watermarks on AI-Generated Content, Threatens Lawsuits for Tampering

This proposed legislation, the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025, is a major move to tackle the growing problem of deepfakes and AI-generated media. Essentially, it aims to give digital content a paper trail, called "content provenance information," making it easier to tell whether an image, video, or piece of text was created or altered by an algorithm.

Starting two years after enactment, commercial software used to create or significantly modify digital content must give users the option to attach this secure, machine-readable digital label. Think of it as a built-in, tamper-resistant digital watermark that says, "I was made by AI": offering the label is mandatory for the tool maker, while attaching it is the user's choice. The goal is simple: if you're scrolling through social media, you should have a way to know whether that shocking video is real or was cooked up by a generative model.
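The bill, as described above, asks only for a secure, machine-readable label bound to the content; it doesn't dictate a file format. To make the idea concrete, here is a purely illustrative Python sketch of the kind of record such a label might carry. Every field name is hypothetical (real-world efforts such as the C2PA standard define their own schemas):

```python
# Purely illustrative: one possible shape for a provenance label.
# All field names are hypothetical; the bill requires a secure,
# machine-readable label, not any particular schema.
provenance_label = {
    "content_hash": "sha256:9f86d081...",          # fingerprint of the media file
    "created_with": "ExampleImageGen 3.1",          # tool that made or modified it
    "ai_generated": True,                           # generative AI was involved
    "modifications": ["inpainting", "upscaling"],   # significant edits applied
    "owner": "Jane Creator <jane@example.com>",     # rights holder, if declared
    "issued_at": "2025-04-09T12:00:00Z",
    # In practice the record would be cryptographically signed and bound
    # to the content so that tampering is detectable.
    "signature": "<signature over the fields above>",
}
```

A verifier could then recompute the content hash and check the signature, which is what makes the label useful as a watermark rather than just a caption anyone can edit.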

The AI Training Speed Bump

Here’s where this bill hits AI developers right in the data pipeline. Under Section 6(c), if a piece of content has this new provenance label attached—or if you know its label was illegally stripped off—you cannot use it to train an AI system or generate new synthetic content unless you get express, informed consent from the original content owner. On top of that, you have to follow any terms of use, which often means paying up if the owner requires compensation for using their copyrighted work.

For AI companies, especially those building the next generation of models, this is a massive change. Right now, many models are trained on vast, often uncurated datasets scraped from the internet. This provision means that any future dataset containing labeled content will require meticulous licensing and payment, creating a major administrative and financial hurdle. It's a win for artists and creators seeking payment for their work, but it could seriously slow the pace of AI innovation that relies on massive, unrestricted data access.
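To make the Section 6(c) logic concrete, here is a minimal, hypothetical sketch of the gate a training-data pipeline might apply. None of these function or field names come from the bill; they are assumptions for illustration only:

```python
def may_use_for_training(item: dict, consent_registry: dict) -> bool:
    """Hypothetical Section 6(c) gate for one piece of scraped content.

    Assumptions for this sketch: `item` carries an optional provenance
    label and a flag for labels known to have been illegally stripped;
    `consent_registry` maps an owner to their recorded consent.
    """
    label = item.get("provenance_label")
    stripped = item.get("label_known_stripped", False)

    # Content with no label and no known tampering falls outside 6(c).
    if label is None and not stripped:
        return True

    # Labeled (or stripped) content needs express, informed consent from
    # the owner, plus compliance with any terms of use the owner sets.
    owner = label.get("owner") if label else item.get("presumed_owner")
    consent = consent_registry.get(owner, {})
    return bool(consent.get("express_informed") and consent.get("terms_satisfied"))


# Usage: keep only the items the pipeline may lawfully train on.
scraped_items = [
    {"provenance_label": None},                            # unlabeled: usable
    {"provenance_label": {"owner": "jane@example.com"}},   # labeled: needs consent
]
registry = {"jane@example.com": {"express_informed": True, "terms_satisfied": True}}
usable = [d for d in scraped_items if may_use_for_training(d, registry)]
```

The key point is the flipped default: labeled content is excluded unless consent is affirmatively on record, which is exactly what forces the licensing work described above.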

Who’s Policing the Digital Watermarks?

This bill doesn't mess around when it comes to enforcement. It gives power to three major groups. First, the Federal Trade Commission (FTC) can treat violations—like companies failing to provide the labeling tools or platforms tampering with the labels—as an unfair or deceptive business practice, which carries significant fines.

Second, state attorneys general can sue on behalf of their residents to stop violations and recover damages. This creates a powerful state-level check against bad actors. Third, and most importantly for the average content creator, the bill creates a private right of action (Section 7(c)). If you own content and someone illegally strips off your provenance label or uses your labeled content for AI training without permission, you can sue them directly for damages, legal costs, and attorney’s fees. This puts real teeth into the labeling requirements.

The Platform Loophole

While the bill generally prohibits "Covered Platforms"—big players like major social media sites with over $50 million in annual revenue or 25 million monthly users—from removing or altering provenance information, there’s an interesting exception in Section 6(b)(2). A platform can remove or alter the label if it is "necessary, proportionate, and limited to performing security research."
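The covered-platform test itself is a simple either/or threshold. A one-line sketch, with parameter names of my own choosing (the statutory definition controls):

```python
def is_covered_platform(annual_revenue_usd: int, monthly_active_users: int) -> bool:
    # Hypothetical reading of the thresholds summarized above:
    # a platform is covered if it crosses EITHER bar, not both.
    return annual_revenue_usd > 50_000_000 or monthly_active_users > 25_000_000
```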

That security-research exception is a potential gray area. While the intent might be to allow platforms to test their defenses against tampering, the language is broad: it could be read to let large platforms temporarily strip identifying information under the guise of security testing, raising questions about transparency and accountability. For the rest of us, it means trusting that the biggest players won't use the exception to dodge the spirit of the law.

The Bottom Line for Busy People

If you consume news, watch videos, or follow social media trends, this bill is trying to restore some integrity to your feed. It’s an attempt to give you a tool—the digital watermark—to verify what you see. If you’re a creator, it offers a new layer of protection and a path to compensation for your work being used to train AI. If you work in tech, it means new compliance rules and a major headache for sourcing training data. It’s a necessary step toward digital accountability, but one that will certainly change how AI is built and deployed in the U.S.