PolicyBrief
H.R. 7985 | 119th Congress | March 18, 2026
CHATBOT Act
Status: In Committee

The CHATBOT Act mandates transparency for AI chatbots by prohibiting them from deceptively implying they possess professional licenses or are operated by licensed human professionals.

Sponsor: Rep. Kevin Mullin (D, CA-15)


CHATBOT Act Targets AI Deception: New Rules Prohibit Bots from Faking Professional Licenses in Health, Law, and Finance

The CHATBOT Act sets a hard line against digital deception by requiring AI companies to be honest about whether their bots are licensed professionals. Under Section 2, any company deploying an AI chatbot is prohibited from generating output that falsely implies the bot holds a professional license or that a human expert has verified its answers. This applies specifically to 'high-stakes' sectors: healthcare, legal services, finance, insurance, and accounting. If a bot gives you medical advice or tax help, it can no longer pretend it's a doctor or a CPA unless that's actually the case. The bill gives the Federal Trade Commission (FTC) 12 months to issue specific compliance guidance, with violations treated as 'unfair or deceptive acts or practices' under existing federal law.

No More Digital Dress-Up

This legislation focuses on the 'reasonable user' perspective to determine if a bot is being shady. For example, if you’re using a legal-tech app and the AI claims to have 'years of experience in the courtroom' or uses a fictitious bar association number to sound more authoritative, that’s a violation under the 'Prohibited Practices' section. However, the bill is careful to distinguish between expert advice and general info. A bot can still explain the general process of filing for divorce or how a deductible works without breaking the law, as long as it doesn’t suggest it has the specific professional credentials required by your state.

Putting Teeth in the Tech

What makes this bill stand out is how it handles enforcement. It doesn't just wait for the FTC to act; it empowers state attorneys general and everyday citizens to take legal action. Under the 'Private Right of Action' provision, if you are harmed by a bot that lied about its credentials, you can sue in federal court for actual damages or $5,000 per violation, whichever is higher. If the company lied on purpose ('willful or knowing'), a judge can triple those damages. To keep these penalties relevant as the cost of living rises, the bill mandates that the $5,000 statutory amount be adjusted annually for inflation based on the Consumer Price Index.
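The damages math above can be sketched in a few lines. This is an illustration only, not legal text: the function name is ours, and we assume (as a plausible reading of 'per violation') that the $5,000 minimum scales with the number of violations before being compared to actual damages.

```python
def statutory_damages(actual_damages: float,
                      violations: int,
                      willful: bool,
                      per_violation_minimum: float = 5_000.0) -> float:
    """Illustrative sketch of the bill's Private Right of Action formula:
    the greater of actual damages or the (CPI-adjustable) per-violation
    minimum, trebled when the conduct was willful or knowing."""
    base = max(actual_damages, per_violation_minimum * violations)
    return base * 3 if willful else base

# Example: 2 violations, $3,000 in actual harm, willful conduct.
# Greater of $3,000 vs. 2 x $5,000 = $10,000; trebled = $30,000.
print(statutory_damages(3_000, 2, willful=True))
```

The CPI adjustment is modeled here simply as a changeable `per_violation_minimum`, since the bill ties the dollar figure, not the formula, to inflation.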

Accountability for the AI Age

For the digital natives and busy professionals who rely on quick online tools, this creates a much-needed safety net. If you’re a small business owner using an AI tool to handle your payroll or a parent checking a bot for medical symptoms, you deserve to know if the 'advice' is coming from a verified expert or just a sophisticated word-generator. While this might increase legal risks for AI developers and companies using misleading marketing, the bill explicitly protects state laws that offer even stronger consumer protections. By creating a five-year window for individuals to file lawsuits after discovering a violation, the Act ensures that companies can't just hide behind complex algorithms and hope the clock runs out.