Policy Brief
H.RES. 963
119th Congress · December 18, 2025
Condemning antisemitism in all its forms, including the proliferation and amplification of antisemitic content on artificial intelligence (AI) platforms, urging robust, transparent safeguards for AI, and recognizing stakeholders working to counter this threat.
IN COMMITTEE

This resolution condemns antisemitism amplified by AI, urges robust platform safeguards, and encourages stakeholder collaboration to counter this threat while upholding constitutional rights.

Sponsor: Sara Jacobs (D), Representative, CA-51

LEGISLATION

AI Platforms Urged to Install 'Safety-by-Design' Guardrails to Block Antisemitic Content

This resolution is a clear, direct shot at the tech industry, specifically targeting how Artificial Intelligence (AI) and social media platforms can become unintentional (or intentional) amplifiers of antisemitism. Essentially, it condemns hate speech across the board, but focuses on the modern problem of AI systems generating or spreading antisemitic content, conspiracy theories, and calls for violence. The core message is that combating this digital hate is a national priority, and the tech companies creating these tools need to step up their game and put in real safeguards.

The Algorithm’s Hate Problem

If you’ve ever seen a chatbot go off the rails or an image generator spit out something deeply offensive, you know the problem is real. This resolution directly calls out tech companies, the developers and deployers of AI systems, to take responsibility. They are urged to implement "strong safeguards" to prevent their systems from producing or amplifying content that is antisemitic, enables harassment, or facilitates targeted abuse. Think of it like a product recall for software: if your AI can be weaponized to target a specific group, the resolution says you have a responsibility to fix the design flaw. This is where the rubber meets the road for companies like Google, Meta, and OpenAI: the resolution urges them (it can't force them, since resolutions are non-binding) to bake "safety-by-design" into their core models.
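
To make the "safety-by-design" idea concrete, here is a minimal sketch of what a pre- and post-generation guardrail might look like. Everything in it is hypothetical: `classify_hate_risk` stands in for a trained moderation classifier, and `guarded_generate` is an illustrative wrapper, not any vendor's actual API.

```python
# Minimal sketch of a "safety-by-design" guardrail: screen both the user's
# prompt and the model's output before anything reaches the user.

BLOCKED_RESPONSE = "This request conflicts with our content policy."

def classify_hate_risk(text: str) -> float:
    """Hypothetical stand-in: return a risk score in [0, 1].

    A production system would call a trained moderation model here;
    this placeholder just matches against a demo term list.
    """
    demo_signals = ["conspiracy_slur_example"]  # placeholder terms only
    hits = sum(term in text.lower() for term in demo_signals)
    return min(1.0, hits / max(len(demo_signals), 1))

def guarded_generate(prompt: str, generate_fn, threshold: float = 0.5) -> str:
    # Pre-generation check: refuse prompts that are themselves abusive.
    if classify_hate_risk(prompt) >= threshold:
        return BLOCKED_RESPONSE
    output = generate_fn(prompt)
    # Post-generation check: never ship output the classifier flags.
    if classify_hate_risk(output) >= threshold:
        return BLOCKED_RESPONSE
    return output

if __name__ == "__main__":
    # generate_fn is a placeholder for any text-generation backend.
    echo_model = lambda p: f"Model response to: {p}"
    print(guarded_generate("Tell me about digital literacy.", echo_model))
```

The design point is that the check wraps generation on both sides, so safety isn't a bolt-on moderation queue after publication but part of the generation path itself.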

The Push for Transparency and Research

This isn't just about telling companies to be better; it’s about demanding the tools to check the companies' homework. The resolution encourages the development of standards and frameworks to measure, mitigate, and govern risks related to hate speech in AI. Crucially, it calls for improved data sharing and researcher access (with privacy protections, of course) so academics and civil society groups can actually study how antisemitism spreads on these platforms. For the average user, this means those periodic, vague transparency reports platforms put out should get a lot more specific, detailing content removal rates and the efficacy of mitigation efforts. If platforms say they're fighting hate, they need to show the numbers.
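
What would "showing the numbers" actually look like? A hedged sketch: the record schema and metric names below are illustrative assumptions, not anything the resolution or any platform specifies, but they show how a removal rate and a rough mitigation-efficacy proxy could be computed from moderation logs.

```python
# Illustrative transparency-report metrics over hypothetical moderation logs.
from dataclasses import dataclass

@dataclass
class ModerationRecord:  # hypothetical schema, for illustration only
    flagged: bool              # content was flagged as antisemitic hate speech
    removed: bool              # content was actually taken down
    views_before_action: int   # reach accumulated before moderation acted

def transparency_metrics(records: list[ModerationRecord]) -> dict:
    flagged = [r for r in records if r.flagged]
    removed = [r for r in flagged if r.removed]
    return {
        "flagged_items": len(flagged),
        "removal_rate": len(removed) / len(flagged) if flagged else 0.0,
        # Proxy for mitigation efficacy: how much reach flagged content
        # gained before action was taken (lower is better).
        "avg_reach_before_action": (
            sum(r.views_before_action for r in flagged) / len(flagged)
            if flagged else 0.0
        ),
    }

if __name__ == "__main__":
    sample = [
        ModerationRecord(flagged=True, removed=True, views_before_action=120),
        ModerationRecord(flagged=True, removed=False, views_before_action=4500),
        ModerationRecord(flagged=False, removed=False, views_before_action=80),
    ]
    print(transparency_metrics(sample))
```

Even toy metrics like these are more auditable than a report that says only "we removed violating content," which is the gap the resolution's transparency language is aimed at.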

Digital Literacy Meets Constitutional Limits

One of the most practical parts of this resolution is the support for public awareness and digital literacy efforts, especially for youth. The goal is to equip users, educators, and communities to recognize and report AI-generated text, images, or audio that spreads antisemitic narratives. In an age of deepfakes, knowing how to spot AI-generated misinformation is becoming a necessary life skill, much like knowing how to spot a phishing email.

However, the resolution is careful to include a vital check: it explicitly reaffirms that all measures taken to address antisemitism must be consistent with the U.S. Constitution. This is a nod to the free speech concerns that always crop up in content moderation debates. It means that while companies are urged to block hate speech and incitement, they must also protect civil liberties and due process, and avoid discriminatory or overly broad enforcement. The challenge here is defining what, exactly, constitutes "antisemitic content" without creating vague standards that could lead to the suppression of legitimate, protected speech. The resolution itself is a balancing act, trying to protect vulnerable communities while upholding the constitutional rights of everyone else.