PolicyBrief
S. 2750
119th Congress · September 10, 2025
SANDBOX Act
IN COMMITTEE

The SANDBOX Act establishes a temporary regulatory program that lets companies test innovative AI products under modified federal rules after demonstrating that the expected benefits outweigh the potential risks.

Sponsor: Sen. Ted Cruz (R-TX)

New 'SANDBOX Act' Would Let AI Companies Skip Federal Rules for Two Years to Test Products

The Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and Experimentation Act, or SANDBOX Act, would set up a fast lane for AI innovation. Essentially, the bill creates an Artificial Intelligence Regulatory Sandbox Program managed by the Office of Science and Technology Policy (OSTP). The main goal? To let companies test new AI products, services, or development methods for up to two years, renewable up to four times, without having to follow specific existing federal rules and regulations that might slow them down.

The Regulatory Fast Pass

Think of the Sandbox program as a temporary regulatory waiver. If a company wants to test an AI product (say, a new automated financial advisor or a self-driving delivery bot), it can apply to the OSTP Director, listing exactly which federal rules it needs to skip and why. This is a big deal: for the testing period, the product won't be held to the same standards as everything else on the market. Applicants must detail how their product will benefit consumers, create jobs, or boost the economy, and they must explain how they plan to mitigate risks like "health and safety risk," "risk of economic damage" (tangible harm to property), or "unfair or deceptive trade practice." If approved by all relevant agencies, the company gets its waiver, but it has to sign a written agreement detailing the exact rules it must follow to manage those risks.

Who’s Taking the Risk?

While this could speed up beneficial AI development, it introduces a significant shift in risk. Since existing consumer protections are temporarily waived, the burden of potential harm shifts to the consumer interacting with the product. The bill requires companies to be transparent: they must post public warnings that the product is being tested, might not work right, and could expose consumers to risks. Crucially, companies must report any incident causing consumer harm within 72 hours. This tight timeline is meant to protect the public, but it places a lot of faith in the company’s ability to detect and report harm quickly, especially with complex AI systems where harm might not be immediately obvious.

The 90-Day Clock and Congressional Oversight

The process for getting approved is surprisingly quick for the government. Once an application is complete, the relevant agencies have just 90 days to decide whether to grant the waiver. If an agency doesn't respond within that window, the OSTP Director can assume it doesn't object and move forward. This 90-day clock puts immense pressure on agencies like the FDA, FTC, or SEC to review complex AI applications quickly, potentially straining their resources and increasing the chance of oversight errors.

On the back end, the Sandbox creates a mechanism for permanent regulatory change. Every year, the OSTP Director must report to Congress which waived rules companies operated safely without. This sets up Congress to potentially repeal or permanently amend those rules, using the sandbox as a real-world testing ground for regulatory reform. The entire program is set to end automatically after 12 years, suggesting this is a temporary experiment in governance.