The "Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025" aims to combat the spread of deepfakes and synthetic media by establishing standards for content provenance, requiring tools to enable content provenance information, and prohibiting the removal or alteration of such information.
Maria Cantwell
Senator
WA
The "Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025" aims to combat the spread of deepfakes and misinformation by establishing standards for content provenance, requiring tools to enable users to include provenance information, and prohibiting the removal or alteration of this information. It establishes a public-private partnership to develop standards for watermarking and detecting synthetic content, and directs the National Institute of Standards and Technology to conduct research and public education on these technologies. The bill also prohibits using content with provenance information to train AI systems without consent and provides enforcement mechanisms through the FTC and state attorneys general, as well as a private right of action for content owners.
This legislation, the Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025, dives headfirst into the tricky world of AI-generated content, often called 'synthetic media' or 'deepfakes'. Its main goal is to create more transparency online by setting up systems to label digital content (images, video, audio, text) that's been created or significantly altered by AI. If the bill becomes law, then starting two years after enactment, companies providing tools to make this kind of content would need to offer users a way to embed clear, machine-readable 'content provenance information' – basically, a digital tag saying "AI made this" or "AI changed this," based on standards yet to be developed.
The core idea revolves around developing industry standards for watermarking and embedding 'content provenance information'. Think of it like a digital fingerprint indicating a file's origin and history. The National Institute of Standards and Technology (NIST) is tasked with leading a public-private group to figure out the best ways to do this, making sure these digital tags are hard to remove or tamper with (Section 4). The bill also mandates NIST research into detection tech and a public education campaign to help everyday folks understand what these labels mean (Section 5). Section 6(a) requires anyone selling tools that create or modify synthetic content to let users include this provenance info and to implement security measures that keep it intact when the user opts in.
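To make "machine-readable provenance information" a bit more concrete, here is a minimal illustrative sketch of what such a record might look like. This is not taken from the bill, which leaves the actual format to the Section 4 standards process; the field names and tool identifier are hypothetical, and a real standard of the kind NIST would coordinate would likely add cryptographic signing so the tag itself is tamper-evident rather than just a detached label.

```python
import hashlib
import json
from datetime import datetime, timezone


def build_provenance_record(content_bytes: bytes, tool_name: str) -> dict:
    """Build a minimal, machine-readable provenance record for a piece of content.

    Illustrative only: field names are hypothetical, not drawn from the bill
    or from any adopted standard.
    """
    return {
        "generator": tool_name,          # hypothetical identifier of the AI tool used
        "ai_generated": True,            # flag indicating AI-created or AI-altered content
        "created": datetime.now(timezone.utc).isoformat(),
        # Hash of the content itself, so later edits to either the file or the
        # record can be detected by recomputing and comparing the digest.
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
    }


if __name__ == "__main__":
    # Example: tag some (stand-in) AI-generated image data with a provenance record.
    fake_image = b"...binary image data..."
    record = build_provenance_record(fake_image, tool_name="ExampleImageModel v1")
    print(json.dumps(record, indent=2))
```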
A big part of this bill focuses on what can't be done with these digital labels. Section 6(b) makes it illegal to knowingly remove or tamper with content provenance information if the goal is to engage in 'unfair or deceptive business practices' – a term that might need further clarification down the line. Large online platforms (think social media sites or search engines meeting specific revenue or user thresholds) are explicitly barred from stripping or hiding this information, with a narrow exception carved out for legitimate security research.
Perhaps one of the most significant parts is Section 6(b)(4), which tackles how AI systems learn. It prohibits using 'covered content' – original works like articles, photos, music – that carries provenance information (or from which it has been unlawfully removed) to train AI models without getting express, informed consent from the content owner. That consent could also involve paying the owner, essentially putting the brakes on scraping copyrighted material willy-nilly to feed AI development.
Enforcement gets a three-pronged approach under Section 7. The Federal Trade Commission (FTC) gets the power to go after violators as if they were breaking FTC rules. State attorneys general can also bring civil lawsuits in federal court to stop violations and get damages for their residents. Finally, individual content owners whose provenance information is improperly removed or whose content is misused for AI training without consent can sue directly for damages, legal fees, and court orders to stop the misuse. This multi-layered enforcement aims to give the rules some teeth, but how effectively it works will depend on how agencies and courts interpret potentially broad terms like 'unfair or deceptive practices'. The practicalities of tracking content usage and proving violations, especially across the vast scale of the internet, remain significant hurdles.