PolicyBrief
S. 3062
119th Congress · October 28, 2025
GUARD Act
IN COMMITTEE

The GUARD Act establishes criminal prohibitions and civil requirements for covered entities regarding age verification, data security, and content disclosures for artificial intelligence chatbots accessible to minors.

Sen. Joshua "Josh" Hawley (R-MO)
LEGISLATION

GUARD Act Mandates Strict Age Verification for All AI Chatbot Users, Imposing $100K Fines for Harmful Content

The Guidelines for User Age-verification and Responsible Dialogue Act of 2025, or the GUARD Act, is a major piece of legislation aimed squarely at regulating artificial intelligence chatbots, especially to protect minors. It tackles two big areas: making it a federal crime to design AI that targets kids with harmful content, and forcing every single user of a wide range of AI services to go through a strict age verification process.

Specifically, the bill creates new criminal offenses, punishable by fines up to $100,000 per offense, for anyone who develops or makes available an AI chatbot that they know, or should know, creates a risk of soliciting minors for sexually explicit conduct or promoting suicide, self-harm, or violence. On the user side, it mandates that companies providing AI chatbots must require every user to create an account and verify their age using a “reasonable age verification measure”—meaning simply clicking a box or typing in a birth date won't cut it anymore (SEC. 3).

The End of Anonymous Chatbots

If you use any kind of AI chatbot—and the definition here is broad, covering any interactive service that creates new, adaptive content—get ready to prove your age. The GUARD Act requires all covered entities to implement a “reasonable age verification process” for every user, both new and existing (SEC. 5). Existing accounts must be frozen 180 days after the bill becomes law until the user provides verifiable age data.

This is a huge shift. If you’re a software developer using an AI coding assistant, or a student using an AI research tool, you will likely need to upload government ID or use some other “commercially reasonable” method to verify you are over 18. The bill is particularly focused on preventing minors from accessing “AI Companions”—chatbots designed to simulate emotional interaction or friendship—by blocking access entirely if the user is identified as a minor (SEC. 6).
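For developers wondering how this gate might actually be wired up, here is a minimal sketch in Python. Everything in it is an assumption for illustration (the Account shape, the enactment date, the rule ordering); the bill specifies outcomes, not implementations.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical enactment date; the real 180-day clock starts when the bill becomes law.
ENACTMENT_DATE = date(2025, 10, 28)
FREEZE_DEADLINE = ENACTMENT_DATE + timedelta(days=180)

@dataclass
class Account:
    user_id: str
    verified_age: Optional[int]  # None until a "reasonable age verification measure" succeeds
    created: date

def can_access(account: Account, is_ai_companion: bool, today: date) -> bool:
    """Access gate modeled on SEC. 5-6: verify every user, block minors from AI companions."""
    if account.verified_age is None:
        # Self-attested birth dates don't count. Pre-existing accounts get a grace
        # period, then freeze at the 180-day deadline until age data is verified.
        return account.created < ENACTMENT_DATE and today < FREEZE_DEADLINE
    if is_ai_companion and account.verified_age < 18:
        return False  # SEC. 6: identified minors are blocked from AI companions entirely.
    return True
```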

The Privacy Trade-Off

Here’s the catch for the average user: to gain access to your AI tools, you have to hand over sensitive data. Recognizing this, the bill puts massive data security obligations on the companies (the bill’s “covered entities”) collecting this information. They must use industry-standard encryption, limit collection to only what is minimally necessary to verify age, and cannot share, transfer, or sell this data to anyone (SEC. 5). For users concerned about their data being sold off, this is a clear benefit, but it also means a vast amount of sensitive personal data is now concentrated in the hands of AI providers.
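What might those data rules look like in code? A rough sketch, assuming Fernet from the widely used cryptography package stands in for “industry-standard encryption” and with invented function names: the point is that only a derived over-18 flag is retained, encrypted at rest, with no pathway to share, transfer, or sell.

```python
import json
from cryptography.fernet import Fernet  # authenticated symmetric encryption

key = Fernet.generate_key()  # in production this key would live in a managed KMS
cipher = Fernet(key)

def record_verification(user_id: str, is_over_18: bool) -> bytes:
    """Keep only what is minimally necessary: the result, never the ID document itself."""
    record = json.dumps({"user_id": user_id, "over_18": is_over_18}).encode()
    return cipher.encrypt(record)  # encrypted at rest

def load_verification(token: bytes) -> dict:
    """Decryption happens only inside the covered entity; there is no export path."""
    return json.loads(cipher.decrypt(token))
```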

For the companies themselves, the compliance burden is enormous. They have to build these robust verification systems, implement strict security protocols, and even “periodically review” previously verified accounts to ensure ongoing compliance. With civil penalties up to $100,000 per violation of these rules, the stakes for getting it wrong are incredibly high.

Goodbye, AI Therapist

Beyond age verification, the GUARD Act demands transparency. Every AI chatbot must clearly and conspicuously disclose to the user that it is an artificial intelligence system and not a human being. This disclosure must happen at the start of every conversation and every 30 minutes thereafter (SEC. 5). Crucially, the AI is prohibited from claiming to be a licensed professional—like a therapist, doctor, or lawyer—and must disclose that it does not provide medical, legal, or financial services.
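The 30-minute cadence is concrete enough to sketch. Below is one hypothetical way a provider could bolt the disclosure onto an existing bot; the wrapper class, message text, and timing logic are all assumptions, not language from the bill.

```python
import time

DISCLOSURE = ("You are talking to an artificial intelligence system, not a human being. "
              "It is not a licensed professional and does not provide medical, legal, "
              "or financial services.")
INTERVAL_SECONDS = 30 * 60  # repeat the disclosure every 30 minutes (SEC. 5)

class DisclosingChat:
    """Wraps any bot callable so each session opens with the disclosure and repeats it on schedule."""
    def __init__(self, bot):
        self.bot = bot
        self.last_disclosed = None  # None forces a disclosure on the first reply

    def reply(self, user_message: str) -> str:
        answer = self.bot(user_message)  # underlying model call (assumption)
        now = time.monotonic()
        if self.last_disclosed is None or now - self.last_disclosed >= INTERVAL_SECONDS:
            self.last_disclosed = now
            return f"{DISCLOSURE}\n\n{answer}"
        return answer
```

A stricter reading might also surface the disclosure on a background timer even while the user is idle, rather than only attaching it to the next reply; the bill’s final text would govern which interpretation holds.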

This is good news for anyone who has seen the unsettling trend of people relying on chatbots for serious advice. If you ask your chatbot for investment tips, it now has to remind you that it’s just a piece of software and you should consult a human professional. This provision cuts through the deceptive marketing and sometimes over-enthusiastic claims of AI systems, ensuring users know exactly what they are interacting with.