PolicyBrief
H.R. 7218: CHAT Act
119th Congress · January 22, 2026
Status: In Committee

The CHAT Act establishes federal protections for minors interacting with companion AI chatbots by prohibiting sexually explicit content, mandating crisis resource notifications for suicidal ideation, and requiring age verification and parental consent for users under 18.

Sponsor: Rep. Michael Lawler (R, NY-17)


New CHAT Act Mandates Age Verification and Suicide Prevention Alerts for AI Companion Apps

The CHAT Act is stepping into the digital 'wild west' of AI companionship by requiring companies to verify the age of every single user and strictly prohibiting bots from having sexually explicit conversations with minors. Under this bill, any AI designed for friendship or emotional support—think of those apps that offer virtual boyfriends, girlfriends, or digital besties—must freeze existing accounts until users prove their age. If a user is under 18, the app is required to link that account to a verified parent, get their consent first, and block any bot that talks dirty. It’s a major shift for tech companies that have largely operated on the honor system until now, and it kicks in officially one year after the bill is signed.

Digital Guardrails for Mental Health

One of the most significant changes involves how these bots handle heavy topics like self-harm. If a minor mentions suicidal thoughts to an AI companion, the bill requires the software to immediately trigger a popup notification with a direct link to the 988 Suicide & Crisis Lifeline. This isn't just a suggestion; the notification must be 'clear and conspicuous' and explicitly state that the bot is not a replacement for a real therapist. For a parent, this means if your teenager is having a dark moment with a digital friend, the company is legally obligated to notify your linked account immediately so you aren't left in the dark about a potential crisis.

The Reality Check: No, It’s Not a Human

To keep users grounded in reality, the bill requires these apps to remind you exactly what you’re talking to. Every interaction must start with a notification that the bot is AI, and that reminder has to pop up again every 60 minutes. This is designed to prevent 'emotional blurring,' where users—especially kids—might forget they are interacting with code rather than a person. For the companies running these services, this means a lot more 'fine print' and frequent interruptions in the user experience, but for the public, it’s a move toward transparency in how these algorithms are marketed and used.
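Mechanically, the recurring reminder is just a timer keyed to the last disclosure. A minimal sketch, assuming a per-session timer (the class and method names are invented for illustration; the 60-minute interval is from the bill):

```python
import time

REMINDER_INTERVAL = 60 * 60  # 60 minutes, per the bill

class DisclosureTimer:
    """Emit the 'you are talking to an AI' notice at session start
    and again after every 60 minutes of continued interaction."""

    def __init__(self, now=time.time):
        self._now = now                      # injectable clock for testing
        self._last_shown: float | None = None

    def should_disclose(self) -> bool:
        current = self._now()
        if self._last_shown is None or current - self._last_shown >= REMINDER_INTERVAL:
            self._last_shown = current       # first message, or an hour elapsed
            return True
        return False
```

Checking this flag before rendering each bot reply would produce exactly the cadence the bill mandates: one notice up front, then one per hour of ongoing chat.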

Verification Hassles and Privacy Trade-offs

While the safety goals are clear, the practical rollout will likely mean more friction for regular users. To comply with Section 3, companies have to use 'commercially available, reasonably accurate' methods to verify age for everyone, not just kids. This might mean you’ll have to upload an ID or use a third-party verification service just to keep using an app you already have. The bill does include a 'Safe Harbor' clause, which protects companies from being sued if they act in good faith and follow FTC guidelines. However, for the average person, this likely means another layer of data sharing and more 'I am not a robot' style hurdles before you can get back to your digital conversation.