The Responsible Innovation and Safe Expertise Act of 2025 establishes conditional civil liability immunity for AI developers who provide transparent documentation and clear usage warnings to learned professionals utilizing their systems.
Cynthia Lummis
Senator
WY
The Responsible Innovation and Safe Expertise (RISE) Act of 2025 aims to encourage responsible AI development by establishing clear rules for transparency and liability. The bill grants conditional civil immunity to AI developers if they provide comprehensive documentation, including Model Cards and Specifications, detailing their AI's capabilities and limitations. This protection shields developers from liability when a licensed professional uses the AI in their practice, provided the developer maintains up-to-date documentation. Ultimately, the Act seeks to foster trust and manage risks associated with rapidly advancing AI technologies.
The Responsible Innovation and Safe Expertise Act of 2025, or the RISE Act, is set to take effect on December 1, 2025, and it is essentially a grand bargain for the AI industry. What it does is simple: it gives companies that develop artificial intelligence tools a major shield against lawsuits when their products mess up. But this protection only kicks in if they play by a strict set of transparency rules.
Under this bill, an AI developer gets conditional immunity from civil liability—meaning you generally can't sue them—if their AI makes an "error" while being used by a "learned professional." Think of a doctor using an AI diagnostic tool or a lawyer using an AI research assistant. If the AI spits out bad advice and causes harm, the developer is off the hook, provided they did two things right.
First, they must publish two key technical documents: a Model Card and a Model Specification. The Model Card is the public report card, detailing the AI’s training data, performance, limits, and intended use. The Model Specification is the blueprint—the configuration instructions and system prompts that define the AI’s behavior. The idea is that if the developer is upfront about the AI’s limits, they shouldn't be held liable when a professional misuses it or ignores the warnings. This mandatory transparency (SEC. 4) is a big win for those of us who believe these black-box systems need to be opened up.
While the transparency requirements are good, the main concern is where the liability ends up. Once the developer meets those requirements, the risk of an AI error shifts heavily onto the learned professional. The bill specifically defines professionals, meaning licensed doctors, lawyers, and engineers, as individuals who must exercise their own independent professional judgment even when they use AI tools (SEC. 3). For a small clinic or a solo law practice, this means the professional becomes the primary target for lawsuits when an AI error causes harm to a client.
Imagine a physician uses an AI tool that, despite the developer’s warnings, misdiagnoses a rare condition. If the developer followed the rules, the patient’s recourse is likely limited to suing the physician, not the deep-pocketed tech company that built the faulty tool. This effectively means that while developers get protection, the financial and professional burden is placed on the people on the front lines using the technology.
There are also some tricky elements in the documentation requirements. Developers are allowed to redact (black out) parts of the Model Specification if they contain trade secrets that aren't related to safety (SEC. 4(b)(1)). This is a potential loophole. What one company considers a non-safety-related trade secret, a client might consider crucial information about how the AI fails. The vagueness here could lead to developers hiding details that users—or courts—need to assess the risks.
The second requirement, and the reason the immunity shield is fragile, is keeping that documentation current. If a developer discovers a new way their AI can fail (a new bug, a new bias, or a new vulnerability), they have only 30 days to update their Model Card and Specification (SEC. 4(c)). If they miss that deadline and the failure causes harm, they lose the immunity for that specific incident. For complex, rapidly evolving AI systems, keeping that documentation perfectly current is a major operational challenge. If developers fall behind, they are exposed; if they keep up, the professional and the client still bear the primary risk.
In short, the RISE Act aims to spur innovation by giving AI developers a clear path to avoid liability, but it does so by making the professional user the ultimate shock absorber for AI errors. For the average person, this means if you receive bad advice or service that involved an AI, your ability to sue the developer is severely restricted, and you'll likely be pursuing the individual professional instead.