PolicyBrief
S. 2714
119th Congress
September 4, 2025
CHAT Act
IN COMMITTEE

The CHAT Act mandates strict age verification, parental consent, and safety monitoring for companion AI chatbots interacting with minors, enforced by the FTC and state attorneys general.

Sponsor: Sen. Jon Husted (R-OH)

LEGISLATION

New CHAT Act Mandates Age Verification and Parental Consent for All AI Companions

The Children Harmed by AI Technology Act, or the CHAT Act, is dropping a major compliance bomb on the companies running those AI chatbots designed to be your friend, therapist, or emotional support system. This bill is a direct response to concerns about minors interacting with AI companions, and it basically says: no more anonymous chatting, especially if you’re under 18.

The New Gatekeepers: Age Checks and Account Freezes

If you use a “companion AI chatbot”—meaning software designed to simulate a personal relationship or offer emotional support—get ready for some friction. Section 3 mandates that every single user must create an account and submit to age verification using a “commercially available method designed to be accurate.” If you have an existing account when this law takes effect (which is one year after enactment, per Section 7), that account gets frozen until you prove your age. For adults, this means a mandatory identity check just to keep chatting with your digital buddy. For the companies, this means a significant cost increase and a huge technical lift to implement reliable age verification across the board. If that verification method is expensive or clunky, it could easily limit access for regular users.
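
To see what that gating might look like in practice, here is a minimal Python sketch of the account-freeze logic. The `Account` model, the 18-year adulthood threshold, and the `record_age_check` hook for a third-party verification provider are all illustrative assumptions; the bill specifies outcomes, not implementations.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AgeStatus(Enum):
    UNVERIFIED = auto()      # new accounts, and pre-existing accounts at the effective date
    VERIFIED_ADULT = auto()
    VERIFIED_MINOR = auto()  # must also be linked to a parental account


@dataclass
class Account:
    user_id: str
    status: AgeStatus = AgeStatus.UNVERIFIED
    parent_account_id: str | None = None  # set once a verified parent links and consents


def can_chat(account: Account) -> bool:
    """Gate chat access on verification state.

    Unverified accounts, including accounts that predate the law, stay
    frozen until an age check succeeds; verified minors additionally
    need a linked parental account.
    """
    if account.status is AgeStatus.UNVERIFIED:
        return False
    if account.status is AgeStatus.VERIFIED_MINOR:
        return account.parent_account_id is not None
    return True


def record_age_check(account: Account, verified_age: int) -> None:
    """Store the outcome reported by a third-party age-verification provider."""
    account.status = (
        AgeStatus.VERIFIED_ADULT if verified_age >= 18 else AgeStatus.VERIFIED_MINOR
    )
```

Note that under this flow, a pre-existing account simply starts in the `UNVERIFIED` state, which is all "freezing" amounts to: the gate stays shut until the provider reports back.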

The Parent Trap: Mandatory Monitoring for Minors

This is where the bill gets serious about child safety. If the age check flags a user as a minor, the company must link that account to a verified parental account and obtain verifiable parental permission before the minor can use the service. But the biggest change is the mandatory surveillance. Section 3 requires companies to "actively monitor" the minor’s conversation for "suicidal ideation." If the AI detects a minor expressing thoughts of self-harm, the company must immediately notify the parent and provide the contact information for the National Suicide Prevention Lifeline. While the intent is clear—protecting vulnerable kids—this mandatory monitoring is a massive, privacy-invasive requirement that puts the onus of mental health crisis intervention squarely on tech companies. Furthermore, minors are completely blocked from engaging in any "sexually explicit communication" with the chatbot.
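
For a sense of how that duty could sit in a message pipeline, here is a hedged Python sketch. The detectors (`flags_self_harm`, `is_sexually_explicit`) and the `notify_parent` channel are hypothetical placeholders; the bill mandates the duties, not the detection mechanism, and any real system would need far more robust classifiers than these stubs.

```python
LIFELINE = "National Suicide Prevention Lifeline: call or text 988"


def flags_self_harm(message: str) -> bool:
    """Placeholder detector; a real system would use a trained classifier."""
    return any(p in message.lower() for p in ("kill myself", "end my life"))


def is_sexually_explicit(message: str) -> bool:
    """Placeholder content filter, stubbed out for this sketch."""
    return False


def notify_parent(parent_contact: str, alert: str) -> None:
    """Stand-in for a real email/SMS/push notification channel."""
    print(f"[alert -> {parent_contact}] {alert}")


def screen_minor_message(parent_contact: str, message: str) -> str | None:
    """Apply the minor-specific duties to one incoming message.

    Returns an intercept response when the message must be handled
    specially, or None to let the conversation continue normally.
    """
    if flags_self_harm(message):
        # Immediate parental notification, with Lifeline contact info.
        notify_parent(parent_contact, f"Possible suicidal ideation detected. {LIFELINE}")
        return f"If you're struggling, please reach out. {LIFELINE}"
    if is_sexually_explicit(message):
        return "That topic isn't available on your account."  # hard block for minors
    return None
```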

Data Limits and Transparency Pop-Ups

The CHAT Act is very strict about what companies can do with all that new age data they collect. Section 3 limits the collection and use of this verification data strictly to age checking, obtaining consent, and compliance records. They can’t just turn it into marketing material. In a nod to transparency, the bill also mandates a “popup” notification at the start of every chat session, and then again at least every 60 minutes, clearly informing the user that they are interacting with an artificial chatbot, not a human. This is a good, simple rule that ensures users are never confused about who—or what—they are talking to, even if that pop-up every hour is going to get annoying.
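
Both rules reduce to a couple of small checks. Below is a sketch of one way to express them; the disclosure wording, the purpose strings, and the session class are assumptions, since the bill fixes the cadence and the permitted uses but not the exact text or code shape.

```python
import time

DISCLOSURE = "Reminder: you are chatting with an AI chatbot, not a human."
INTERVAL_SECONDS = 60 * 60  # "at least every 60 minutes"

# Purpose limitation: verification data may only be used for these.
ALLOWED_DATA_PURPOSES = frozenset(
    {"age_verification", "parental_consent", "compliance_records"}
)


def may_use_verification_data(purpose: str) -> bool:
    """Reject any use of age-verification data outside the permitted purposes."""
    return purpose in ALLOWED_DATA_PURPOSES


class ChatSession:
    """Tracks when the mandatory AI-disclosure notice is due."""

    def __init__(self) -> None:
        self._last_disclosure: float | None = None

    def maybe_disclose(self) -> str | None:
        """Return the notice at session start and every 60 minutes thereafter."""
        now = time.monotonic()
        if (
            self._last_disclosure is None
            or now - self._last_disclosure >= INTERVAL_SECONDS
        ):
            self._last_disclosure = now
            return DISCLOSURE
        return None
```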

Enforcement and the Good Faith Shield

Enforcement is handled by the Federal Trade Commission (FTC), which can treat violations like unfair or deceptive business practices (Section 5). State Attorneys General can also sue on behalf of their residents to stop violations or seek damages. However, the bill includes a “Safe Harbor” (Section 6) for companies. If a company can prove they relied in good faith on the age information provided, followed the FTC’s compliance guidelines, and used industry-standard age verification methods, they won't be held liable for a violation. This shield is important because it acknowledges that even the best systems aren't foolproof, rewarding companies that make a genuine effort to comply with these complex new rules.