The KIDS Act establishes comprehensive federal standards to shield minors from online obscenity, mitigate harms on social media and gaming platforms, regulate AI chatbot interactions, and fund research into digital safety.
Brett Guthrie
Representative
KY-2
The KIDS Act is a comprehensive bill designed to shield minors from online harm across various digital platforms. It mandates age verification for sexually explicit websites, imposes new safety and parental control requirements on social media and gaming platforms, and sets rules for interactions with AI chatbots. The legislation also funds research and education initiatives to better understand and combat online risks facing young users.
The KIDS Act is a massive overhaul of how the internet works for anyone under 17, aiming to strip away the 'Wild West' nature of social media and gaming. It targets the specific features that keep kids glued to screens—like infinite scrolling and reward badges—while forcing platforms to hide direct messaging for kids under 13 and banning disappearing messages for all minors. Beyond just social apps, the bill reaches AI chatbots and gaming platforms, requiring them to prove they aren't steering kids toward harmful behavior or letting strangers slide into their DMs without a parent's green light. Under Titles II and III, these safety settings must be the 'most protective' by default, shifting the burden from parents having to find the 'off' switch to companies having to justify why they'd ever turn it on.
Digital Speed Bumps and Privacy Shields

For the average parent, this means your 14-year-old's TikTok or Instagram feed would look fundamentally different. Section 2 of the bill restricts 'personalized recommendation systems,' the algorithms that feed users content based on their data. Instead of a rabbit hole of auto-playing videos, the bill pushes for more control over what shows up. For the office worker or tradesman whose kid is obsessed with Roblox or Fortnite, Title III (the Safer GAMING Act) requires game companies to build in tools that let you cap spending and limit who can talk to your child. It also tackles the 'creepy' factor of AI: under Title IV, if a teen is chatting with a bot, that bot has to disclose it's not a human and is legally barred from pretending to be a doctor or therapist.
The Age Check and Data Dilemma

One of the biggest shifts comes from the SCREEN Act (Title I), which requires websites where more than one-third of the content is sexually explicit to use 'commercially available' age verification. While the bill explicitly says you don't have to hand over a government ID, it creates a new hurdle for accessing parts of the web. This is a double-edged sword: while it aims to keep kids off adult sites, it puts significant pressure on tech companies to manage verification data securely. If you're a small business owner running a niche platform, the 'Economic Burden' identified in the analysis is real—you'll be on the hook for audits, new security software, and potentially higher legal costs to stay compliant with the FTC's new enforcement powers.
Vague Rules and Future Friction

Because the bill uses broad terms like 'addictive design' and 'harmful activities,' there's a 'Medium' level of vagueness that could lead to some headaches. For example, a feature one person calls 'engaging,' a regulator might call 'addictive.' This could push platforms to over-correct, stripping away useful features just to avoid a lawsuit from a state attorney general. Additionally, while the bill mandates 'break reminders' after three hours of AI use, it's unclear how effectively a bot can actually police a determined teenager. We're looking at a future where the digital experience for a 16-year-old is heavily sanitized and monitored—a massive safety net, but also a fundamental change in how the next generation learns to navigate the open web.