This Act amends platform liability protections to mandate specific processes for addressing cyberstalking and intimate privacy violations, including nonconsensual deepfakes, and strengthens the TAKE IT DOWN Act's removal procedures.
Jake Auchincloss
Representative
MA-4
The Deepfake Liability Act amends Section 230 to impose a duty of care on online platforms regarding cyberstalking and intimate privacy violations, including nonconsensual deepfakes. Section 230 protection becomes conditional on platforms implementing reasonable processes for prevention, data logging, and content removal. The bill also strengthens the TAKE IT DOWN Act by expanding its notice and removal process to cover these violations, with removal required within 48 hours.
The new “Deepfake Liability Act” aims squarely at some of the worst corners of the internet: nonconsensual intimate images, sexually explicit deepfakes, and cyberstalking. At its core, the legislation attempts to force major online platforms to clean up their act or lose the crucial legal shield they currently enjoy. It does this by creating a new “duty of care” for platforms regarding these specific, harmful types of content (SEC. 2).
What this means in practice is a conditional erosion of Section 230 of the Communications Act, the law that generally protects platforms from liability for content posted by users. Under this new act, that protection only holds up if a platform implements a “reasonable process” to prevent intimate privacy violations and cyberstalking. That process has to include minimum data logging requirements to preserve evidence for legal cases and, critically, a clear way for victims to report content and have it removed under the updated TAKE IT DOWN Act (SEC. 2).
The biggest change for users—and the biggest headache for platforms—is the expanded and accelerated takedown requirement. The bill expands the existing notice-and-takedown process to cover both “intimate privacy violations” (which includes deepfakes) and “content relating to cyberstalking.” If you are a “covered individual” (the person depicted or targeted), you can submit a valid removal request to a “covered platform” (social media sites, apps, etc., but not email or messaging services) (SEC. 3).
Once a platform receives a valid request, which requires a statement under penalty of perjury, it must remove the content as soon as possible, and no later than 48 hours after receiving the request. It also has to make “reasonable efforts” to find and remove any known identical copies. For the victim of a deepfake or cyberstalking campaign, this 48-hour deadline is a huge win, offering a quick path to stop the spread of deeply damaging material. For platforms, especially smaller ones, it creates a major compliance burden and a tight window to investigate claims, potentially forcing them to err on the side of removal to avoid losing their liability shield (SEC. 3).
The act also updates the criminal prohibitions against sexually explicit digital forgeries, defining them as intimate visual depictions that have been so skillfully manipulated that they are “virtually indistinguishable from an authentic visual depiction” (SEC. 3). This is a necessary update to keep pace with AI technology, making it easier to prosecute the creation and sharing of these sophisticated fakes. The bill also explicitly states that platforms are not liable if they remove content in “good faith,” even if that content is later found to be lawful (SEC. 3).
This is where the trade-off gets tricky. While the goal is to protect victims, the threat of losing Section 230 immunity combined with the tight 48-hour deadline could lead to what’s called a “chilling effect.” If a platform is under pressure to remove content quickly to avoid a lawsuit, it might remove lawful or borderline content just to be safe. For instance, if a piece of satirical content or political commentary is falsely reported as cyberstalking, the platform might take it down first and ask questions later. This could suppress protected speech, even though the bill includes a clause stating it should not infringe upon First Amendment rights (SEC. 4).