The CHATBOT Act establishes rules, parental controls, and advertising restrictions for AI chatbot companies to protect children and teens.
Ted Cruz, Senator (TX)
The CHATBOT Act establishes comprehensive rules for artificial intelligence chatbot providers, focusing heavily on protecting children and teens. It mandates family account requirements for users under 13 and verifiable parental consent for teens (13-17) accessing these services. Furthermore, the Act prohibits targeted advertising based on the personal data of minors and requires robust parental controls within designated family accounts.
Alright, let's talk about the new CHATBOT Act, officially known as the Children’s Health, Advancement, Trust, Boundaries, and Oversight in Technology Act. This isn't just another piece of tech jargon; it's a serious attempt to rein in how AI chatbots interact with kids and teens. Basically, if a company's main gig is providing an AI chatbot, they're now on the hook for some pretty significant changes, especially when it comes to anyone under 18. The big picture? More control for parents, less data mining of young users, and a clearer understanding that you're talking to a bot, not a buddy.
Starting a year after this bill becomes law, if your kid (under 13) wants to use an AI chatbot, it's not as simple as just signing up anymore. The CHATBOT Act, specifically Section 3, requires these companies to set up a 'family account.' If a child already has an account and doesn't get looped into a family account, that account gets the axe. For teens (ages 13 to 17), companies will need verifiable parental consent before they can even create an account. Think of it like a permission slip for a field trip, but for the digital world. And if a parent revokes that consent, poof, the account is suspended or deleted. When an account is terminated, the company has to delete all personal data, though it must keep a copy available for 90 days if the user or parent asks for one. The upshot: a much shorter digital breadcrumb trail for your kids and teens, which is a pretty big deal for privacy.
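To make that termination flow concrete, here's a minimal sketch of the delete-but-retain-a-copy logic. The bill doesn't prescribe any implementation; every name here (`terminate_account`, the account dict fields, the 90-day window constant) is invented for illustration:

```python
from datetime import datetime, timedelta

# Section 3: a copy of the deleted data stays available for 90 days on request.
RETENTION_WINDOW = timedelta(days=90)

def terminate_account(account: dict, requested_copy: bool, now: datetime) -> dict:
    """Hypothetical sketch of the Section 3 termination flow: delete all
    personal data, optionally holding a copy for 90 days if the user or
    parent asked for one."""
    if requested_copy:
        account["copy_available_until"] = now + RETENTION_WINDOW
    account["personal_data"] = None  # wiped from live systems
    account["status"] = "terminated"
    return account

acct = terminate_account(
    {"personal_data": {"chats": ["hi"]}},
    requested_copy=True,
    now=datetime(2026, 1, 1),
)
assert acct["personal_data"] is None
assert acct["copy_available_until"] == datetime(2026, 4, 1)  # 90 days later
```

If no copy is requested, the data is simply gone at termination, with nothing held back.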
This bill really leans into parental empowerment. Section 5 outlines a whole suite of parental controls for these family accounts. Parents will be able to set time limits on chatbot use – perfect for those trying to manage screen time. They can also turn off those annoying rewards or badges that chatbots use to keep kids hooked, disable notifications, and even block financial transactions. Ever worried about what your kid is chatting about? Parents will get full access to conversation records and activity, plus alerts if their child tries to bypass any settings. The default for all these controls? The most protective option available. So, if you don't touch a setting, it's already set to maximum safety, which is a nice touch for busy parents who might not dive deep into every menu.
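To see what "default to the most protective option" looks like in practice, here's a minimal sketch of how a provider might model the Section 5 controls. The class and field names are hypothetical, not taken from the bill:

```python
from dataclasses import dataclass

@dataclass
class ParentalControls:
    """Hypothetical model of the Section 5 family-account controls.

    Per the Act, each control defaults to its most protective option,
    so an untouched account starts at maximum safety.
    """
    daily_time_limit_minutes: int = 0         # no chatbot time until a parent raises it
    rewards_and_badges_enabled: bool = False  # engagement mechanics off by default
    notifications_enabled: bool = False       # no push notifications by default
    financial_transactions_allowed: bool = False
    parent_has_full_conversation_access: bool = True
    alert_on_bypass_attempt: bool = True

# An account created with no parental input is already maximally restrictive:
defaults = ParentalControls()
assert not defaults.rewards_and_badges_enabled
assert not defaults.financial_transactions_allowed
```

The design choice the bill forces is opt-out safety rather than opt-in: a parent loosens settings deliberately instead of discovering too late that the permissive option was the default.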
One of the most impactful changes for young users is the outright ban on targeted advertising. Section 6 explicitly prohibits companies from using the personal data of children (under 13) or teens (13-17) to push personalized ads. This means no more chatbots learning your kid's favorite cartoon character and then showing them ads for toys related to it. While companies can still show age-appropriate ads based on a user's general age, they can't dig into personal data to make those ads hyper-specific. This is a significant win for reducing commercial exploitation of young people's online activity.
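The Section 6 rule boils down to one distinction: ads may be matched to a minor's general age bracket, but never built from their personal data. A toy check, with an invented function name, might look like this:

```python
def ad_allowed(user_age: int, uses_personal_data: bool) -> bool:
    """Hypothetical sketch of the Section 6 rule: ads that rely on a
    minor's personal data are banned; general age-appropriate ads
    (and ads to adults) are not restricted by this section."""
    is_minor = user_age < 18  # covers both children (<13) and teens (13-17)
    if is_minor and uses_personal_data:
        return False
    return True

assert ad_allowed(user_age=12, uses_personal_data=True) is False   # banned
assert ad_allowed(user_age=12, uses_personal_data=False) is True   # contextual ad, OK
assert ad_allowed(user_age=30, uses_personal_data=True) is True    # adults unaffected
```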
Section 7 clarifies when a company 'knows' a user is a child or teen: the Federal Trade Commission (FTC) will look at 'competent and reliable evidence' and the totality of the circumstances. Interestingly, the bill doesn't force companies to implement age-gating or verification tools, nor does it require them to collect new age-related data. If they do collect age data voluntarily for compliance, they can't use it for anything else or keep it longer than necessary. Enforcement falls to the FTC, which treats violations as unfair or deceptive practices, and state attorneys general can also step in to sue companies that aren't playing by the rules (Section 8). This dual enforcement mechanism could mean more robust oversight.

The whole thing kicks in a year after it's signed into law (Section 12), giving companies some time to adjust. And to make sure we understand the bigger picture, the National Science Foundation is tasked with studying how these AI chatbots actually affect kids' and teens' social relationships and mental health (Section 10), which is a smart move for future policy adjustments.