Policy Brief
H.R. 6489: SAFE BOTs Act
119th Congress · December 5, 2025
Status: In Committee

The SAFE BOTs Act establishes federal requirements for chatbots interacting with minors, including mandatory disclosures, content restrictions, and policies to protect their well-being.

Sponsor: Rep. Erin Houchin (R, IN-9)


SAFE BOTs Act Mandates AI Disclosures and Crisis Hotlines for Users Under 17, Taking Effect One Year After Enactment

The Safeguarding Adolescents From Exploitative BOTs Act (SAFE BOTs Act) is the federal government’s first major attempt to put guardrails around how AI chatbots interact with users under 17 years old. If you have kids who use these tools, or you work in the tech space, this bill is setting some crucial new rules of the road.

The Friend Who Isn't Real: Mandatory Disclosures

This bill cuts straight to the reality that some kids might treat a chatbot like a person or even a therapist. The core requirement here is transparency: Chatbot providers cannot allow their AI to claim it’s a licensed professional. More importantly, they must clearly and conspicuously disclose to any minor user that the chatbot is an artificial intelligence system and not a natural person.

This disclosure has to happen the first time the minor interacts with the bot, and again anytime the user asks the bot if it’s an AI. Think of it as a mandatory, age-appropriate disclaimer that pops up before the conversation starts. For parents, this means a little less worry about their kids mistaking an algorithm for a real, qualified adult. The bill also specifies that if a minor prompts the chatbot about suicide or suicidal thoughts, the bot must immediately provide resources for a crisis intervention hotline. This is a critical safety net, using the AI interaction as a trigger for real-world help.
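For readers on the building side, here is a rough, purely illustrative sketch (in Python) of what wiring up those two triggers might look like. The bill does not prescribe any implementation; the function names, keyword lists, and the 988 Lifeline wording below are our own assumptions for illustration.

```python
# Illustrative sketch only: how a provider *might* implement the bill's
# disclosure and crisis-referral triggers. All names and heuristics here
# are hypothetical, not drawn from the bill text.

AI_DISCLOSURE = (
    "Heads up: you are chatting with an artificial intelligence system, "
    "not a real person or a licensed professional."
)
CRISIS_RESOURCE = (
    "If you are thinking about suicide or need support right now, you can call "
    "or text the 988 Suicide & Crisis Lifeline to reach a trained counselor."
)

IDENTITY_QUESTIONS = ("are you an ai", "are you a robot", "are you human", "are you real")
CRISIS_KEYWORDS = ("suicide", "kill myself", "end my life", "suicidal")


def required_notices(message: str, is_minor: bool, is_first_interaction: bool) -> list[str]:
    """Return any mandatory notices that must accompany the chatbot's reply."""
    notices = []
    if is_minor:
        text = message.lower()
        # Disclose on the first interaction, and again whenever the user asks.
        if is_first_interaction or any(q in text for q in IDENTITY_QUESTIONS):
            notices.append(AI_DISCLOSURE)
        # Surface crisis-hotline resources when suicide comes up.
        if any(k in text for k in CRISIS_KEYWORDS):
            notices.append(CRISIS_RESOURCE)
    return notices
```

In practice a provider would likely rely on a trained classifier rather than keyword matching, but the trigger points (first interaction, identity questions, suicide-related prompts) are the ones the bill spells out.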

Setting Boundaries: Time Limits and Content Filters

Beyond identity and crisis response, the SAFE BOTs Act tackles engagement time and harmful content. Providers must implement policies to advise a user to take a break after a continuous, uninterrupted interaction has lasted for three hours. While three hours is a long time, this is a clear attempt to curb potentially unhealthy usage patterns. It’s the digital equivalent of a parent telling their kid to go outside.
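Purely as an illustration, the break-reminder policy boils down to session bookkeeping along the lines of the sketch below; the three-hour threshold comes from the bill, while the class structure and names are assumptions.

```python
from __future__ import annotations
from datetime import datetime, timedelta

BREAK_THRESHOLD = timedelta(hours=3)  # continuous-interaction limit named in the bill
BREAK_REMINDER = "You've been chatting for a while. This might be a good time to take a break."


class Session:
    """Hypothetical per-user session tracker for the break-reminder policy."""

    def __init__(self) -> None:
        self.started_at = datetime.now()
        self.reminded = False

    def maybe_remind(self) -> str | None:
        # Advise a break once a continuous interaction passes three hours.
        # (Resetting the clock after a pause in the conversation is omitted for brevity.)
        if not self.reminded and datetime.now() - self.started_at >= BREAK_THRESHOLD:
            self.reminded = True
            return BREAK_REMINDER
        return None
```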

The bill also forces providers to create and maintain policies to address—for minor users—sexual material harmful to minors (including child pornography), gambling, and the distribution, sale, or use of illegal drugs, tobacco, or alcohol. This means the AI models must be trained and filtered to prevent them from generating or encouraging conversations around these topics for users under 17. For the average provider, this means a significant investment in content moderation and filtering technology.
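One illustrative way a provider might encode that policy internally is a simple table of restricted categories checked before a response goes out to a minor; the category names below track the bill's list, but the structure and the check itself are assumptions, not anything the bill specifies.

```python
# Hypothetical content-policy table for minor users. The restricted categories
# mirror the bill's list; the data structure is an illustrative assumption.
RESTRICTED_FOR_MINORS = frozenset({
    "sexual_material_harmful_to_minors",
    "child_sexual_abuse_material",
    "gambling",
    "illegal_drug_distribution_sale_or_use",
    "tobacco",
    "alcohol",
})


def allowed_for_minor(topic_category: str) -> bool:
    """Return False when a classified topic falls into a restricted category."""
    return topic_category not in RESTRICTED_FOR_MINORS
```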

Who’s Checking the Fine Print?

The Federal Trade Commission (FTC) is tasked with enforcing these new requirements, treating violations as an unfair or deceptive act or practice. State attorneys general can also bring civil actions, though they must notify the FTC first. This dual enforcement mechanism gives the law teeth, ensuring that providers who willfully disregard the fact that their users are minors (the bill's "covered users") can be held accountable.

However, there’s a catch for states: this federal law overrides any state or local laws that cover the exact same matters. While the intent is to create a consistent national standard, this preemption clause could nullify stronger local protections already in place. The requirements take effect one year after the law is enacted, giving the tech industry time to recalibrate its systems.

Finally, the bill mandates a four-year longitudinal study led by the National Institutes of Health (NIH) to evaluate the actual risks and benefits of chatbot use for minors' mental health, specifically looking at loneliness, anxiety, and suicidal ideation. This is smart policy, ensuring that future regulatory moves will be based on scientific data rather than guesswork.