Policy Brief
S. 1396 · 119th Congress · April 9, 2025
Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025
Status: In Committee

The "Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025" aims to combat the spread of deepfakes and synthetic media by establishing standards for content provenance, requiring tools to enable content provenance information, and prohibiting the removal or alteration of such information.

Maria Cantwell (D), Senator, WA


Proposed Bill Mandates Labels on AI-Generated Content, Restricts Training Data Use Starting in Two Years

This legislation, the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025, dives headfirst into the tricky world of AI-generated content, often called 'synthetic media' or 'deepfakes'. Its main goal is to create more transparency online by setting up systems to label digital content (images, video, audio, text) that has been created or significantly altered by AI. Starting two years after enactment, if the bill becomes law, companies providing tools to make this kind of content would need to offer users a way to embed clear, machine-readable 'content provenance information': basically, a digital tag saying "AI made this" or "AI changed this," based on standards yet to be developed.

Digital Watermarks: The Plan to Tag AI Content

The core idea revolves around developing industry standards for watermarking and embedding 'content provenance information'. Think of it like a digital fingerprint indicating a file's origin and history. The National Institute of Standards and Technology (NIST) is tasked with leading a public-private group to figure out the best ways to do this, making sure these digital tags are hard to remove or tamper with (Section 4). The bill also mandates NIST research into detection tech and a public education campaign to help everyday folks understand what these labels mean (Section 5). For anyone selling tools that create or modify synthetic content, Section 6(a) requires them to enable users to include this provenance info and implement security to keep it intact if the user opts in.
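To make the "digital fingerprint" idea concrete, here is a minimal sketch of what a machine-readable provenance tag *could* look like. This is purely illustrative: the bill leaves the actual format to the NIST-led standards process, and the field names, structure, and `make_provenance_record` helper below are all assumptions for demonstration, loosely inspired by existing industry efforts. The key concept it shows is binding the tag to the exact content bytes with a hash, so that tampering with either the file or the label is detectable.

```python
import hashlib
import json

def make_provenance_record(content: bytes, generator: str, ai_generated: bool) -> str:
    """Build a hypothetical machine-readable provenance tag for a media file.

    The real format will be defined by the standards Section 4 calls for;
    this sketch only illustrates the concepts the bill names: origin,
    AI involvement, and tamper-evidence via a content hash.
    """
    record = {
        # Binds the tag to these exact bytes; any edit changes the hash.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        # The tool that created or modified the content.
        "generator": generator,
        # The "AI made this" / "AI changed this" flag.
        "synthetic": ai_generated,
    }
    return json.dumps(record, sort_keys=True)

# Hypothetical usage: tag output from an imaginary image-generation tool.
tag = make_provenance_record(b"fake image bytes", "ExampleImageGen 2.0", True)
print(tag)
```

In practice a scheme like this would also need the record cryptographically signed and embedded in (or watermarked into) the file itself, which is exactly the hard part the NIST working group is tasked with standardizing.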

Keeping Labels On and AI Training in Check

A big part of this bill focuses on what can't be done with these digital labels. Section 6(b) makes it illegal to knowingly remove or tamper with content provenance information if the goal is to engage in 'unfair or deceptive business practices' – a term that might need further clarification down the line. Large online platforms (think social media sites or search engines meeting specific revenue or user thresholds) are explicitly barred from stripping or hiding this information, with a narrow exception carved out for legitimate security research.

Perhaps one of the most significant parts is Section 6(b)(4), which tackles how AI systems learn. It prohibits using 'covered content' – original works like articles, photos, music – that has provenance information (or where it's been illegally removed) to train AI models without getting express, informed consent from the content owner. This could also involve paying the owner, essentially putting the brakes on scraping copyrighted material willy-nilly to feed AI development.

Who Makes Sure This Happens?

Enforcement gets a three-pronged approach under Section 7. The Federal Trade Commission (FTC) gets the power to go after violators as if they were breaking FTC rules. State attorneys general can also bring civil lawsuits in federal court to stop violations and get damages for their residents. Finally, individual content owners whose provenance information is improperly removed or whose content is misused for AI training without consent can sue directly for damages, legal fees, and court orders to stop the misuse. This multi-layered enforcement aims to give the rules some teeth, but how effectively it works will depend on how agencies and courts interpret potentially broad terms like 'unfair or deceptive practices'. The practicalities of tracking content usage and proving violations, especially across the vast scale of the internet, remain significant hurdles.