The CHAT Act mandates strict age verification, parental consent, and safety monitoring for companion AI chatbots interacting with minors, enforced by the FTC and state attorneys general.
Jon Husted
Senator
OH
The Children Harmed by AI Technology (CHAT) Act mandates strict age verification and parental consent for minors accessing companion AI chatbots, defined as AI designed to simulate a personal relationship or provide emotional support. Covered entities must monitor these chats for suicidal ideation, immediately notifying parents and providing crisis resources when necessary, and must block minors from sexually explicit content. Furthermore, companies must clearly disclose to users that they are interacting with an AI, not a human, at the start of and periodically during every chat session.
The Children Harmed by AI Technology Act, or the CHAT Act, is dropping a major compliance bomb on the companies running those AI chatbots designed to be your friend, therapist, or emotional support system. This bill is a direct response to concerns about minors interacting with AI companions, and it basically says: no more anonymous chatting, especially if you’re under 18.
If you use a “companion AI chatbot”—meaning software designed to simulate a personal relationship or offer emotional support—get ready for some friction. Section 3 mandates that every single user must create an account and submit to age verification using a “commercially available method designed to be accurate.” If you have an existing account when this law takes effect (which is one year after enactment, per Section 7), that account gets frozen until you prove your age. For adults, this means a mandatory identity check just to keep chatting with your digital buddy. For the companies, this means a significant cost increase and a huge technical lift to implement reliable age verification across the board. If that verification method is expensive or clunky, it could easily limit access for regular users.
This is where the bill gets serious about child safety. If the age check flags a user as a minor, the company must link that account to a verified parental account and obtain verifiable parental permission before the minor can use the service. But the biggest change is the mandatory surveillance. Section 3 requires companies to "actively monitor" the minor’s conversation for "suicidal ideation." If the AI detects a minor expressing thoughts of self-harm, the company must immediately notify the parent and provide the contact information for the National Suicide Prevention Lifeline. While the intent is clear—protecting vulnerable kids—this mandatory monitoring is a massive, privacy-invasive requirement that puts the onus of mental health crisis intervention squarely on tech companies. Furthermore, minors are completely blocked from engaging in any "sexually explicit communication" with the chatbot.
The CHAT Act is very strict about what companies can do with all that new age data they collect. Section 3 limits the collection and use of this verification data strictly to age checking, obtaining consent, and compliance records. They can’t just turn it into marketing material. In a nod to transparency, the bill also mandates a pop-up notification at the start of every chat session, and then again at least every 60 minutes, clearly informing the user that they are interacting with an artificial chatbot, not a human. This is a good, simple rule that ensures users are never confused about who—or what—they are talking to, even if that pop-up every hour is going to get annoying.
Enforcement is handled by the Federal Trade Commission (FTC), which can treat violations as unfair or deceptive business practices (Section 5). State attorneys general can also sue on behalf of their residents to stop violations or seek damages. However, the bill includes a “Safe Harbor” (Section 6) for companies. If a company can prove it relied in good faith on the age information provided, followed the FTC’s compliance guidelines, and used industry-standard age verification methods, it won’t be held liable for a violation. This shield is important because it acknowledges that even the best systems aren’t foolproof, rewarding companies that make a genuine effort to comply with these complex new rules.