This bill establishes a federal duty of care and safety requirements for online platforms with respect to minors, mandates algorithmic transparency and user choice over content personalization, and sets out enforcement mechanisms.
Sponsor: Sen. Marsha Blackburn (TN)
The Kids Online Safety Act establishes a comprehensive "duty of care" requiring online platforms to exercise reasonable care in designing features so as to prevent and mitigate specific harms to minors, such as those related to mental health and exploitation. It mandates that platforms give minors safety settings that default to the most protective option, along with robust parental controls over data sharing and recommendations. The legislation also demands significant transparency around algorithmic ranking systems and grants the FTC enforcement authority.
The Kids Online Safety Act (KOSA) is a sweeping piece of legislation aimed at reshaping how major online platforms (think social media, video games, and messaging apps) treat users they know are minors, defined as users under the age of 17. At its core, the bill imposes a new "duty of care" (Sec. 102) on these platforms, requiring them to exercise "reasonable care" in the design of their features to prevent or mitigate specific harms to minors. These harms include the promotion of eating disorders, substance use, and suicide, as well as severe online harassment and sexual exploitation. If you're a parent, this is the section that says platforms can't just shrug off design choices and content that are actively harming kids.
For any user the platform knows is a minor, the game changes completely. Platforms must provide easy-to-use safety tools that let minors limit who can contact them, restrict others from seeing their personal data, and control design features meant to keep them hooked, like infinite scrolling and push notifications (Sec. 103). Crucially, the bill mandates that all of these safety features default to the most protective option available. That means no more hunting through obscure settings menus; the platform starts in "safe mode" unless someone actively changes the settings. For parents, KOSA requires platforms to provide tools to manage account settings, block purchases, and monitor time spent on the service, and those parental controls must be on by default for children (users under 13).
Perhaps the most significant change for all users, regardless of age, comes in Title II, which tackles algorithmic transparency. If a platform uses an “opaque algorithm”—one that customizes your content feed based on your past behavior and personal data—it must now give you a choice (Sec. 202). Users must be able to easily switch to an “input-transparent algorithm,” which serves up content without using your history or inferred data for ranking. This is a big deal for anyone tired of the “filter bubble.” If you want to see content chronologically or based only on basic preferences you set, the platform must now offer that option without penalizing you for using it. For minors, platforms must also allow them to opt out of personalized recommendations entirely.
For the largest platforms (over 10 million U.S. users), KOSA demands serious accountability. These companies must issue a public report at least once a year, based on an audit by an independent third party (Sec. 105). This report has to detail everything from the total number of known minor users and their average time spent on the platform to an evaluation of how effective their safety measures actually are. This is intended to give researchers, parents, and regulators real data, rather than just corporate talking points, about what’s happening on these platforms.
While the bill is aggressive in setting safety standards, it contains two important caveats that could limit its effectiveness. First, the law explicitly states that the new safety rules cannot be used to enforce restrictions based on users' viewpoints, as expressed through speech protected by the First Amendment (Sec. 102). This creates a potential loophole: platforms could argue that mitigating harmful but protected speech (like extreme political content or certain forms of harassment) is beyond the scope of the law. Second, and perhaps more critically, the bill clarifies that it does not require platforms to collect new personal data for the purpose of age verification (Sec. 112). This means that while platforms must protect users they know are minors, they are not forced to implement the kind of robust age-gating systems that would ensure all children under 13 are correctly identified and placed under parental controls. This reliance on existing knowledge means that many minors who lie about their age could continue to slip through the cracks.