The CREATE AI Act of 2025 establishes the National Artificial Intelligence Research Resource (NAIRR) to democratize access to essential AI computing power and data for diverse U.S. researchers, students, and small businesses.
Jay Obernolte
Representative
CA-23
The CREATE AI Act of 2025 establishes the National Artificial Intelligence Research Resource (NAIRR) to democratize access to the high-powered computing and data necessary for advanced AI research. This resource will be managed by a non-governmental Operating Entity overseen by a new Steering Subcommittee. The goal is to broaden participation in AI development beyond large tech companies to maintain U.S. leadership and ensure AI benefits all Americans. Access will be prioritized for U.S.-affiliated researchers, educators, and students, with specific provisions for ethics, security, and privacy reviews.
The CREATE AI Act of 2025 tackles a major problem in artificial intelligence: only a handful of massive tech companies have the computing power and data needed to do cutting-edge AI research. Think of it like this: if AI is the new oil, only a few super-tankers can reach the deep-sea wells. This bill aims to build a public pipeline.
Specifically, the Act mandates the creation of the National Artificial Intelligence Research Resource (NAIRR) within one year of enactment. The goal is simple: democratize access to the tools needed for innovation. This isn't just about fairness; the bill’s findings section notes that the U.S. needs to bring in talent from diverse backgrounds—students, small businesses, and university researchers—to maintain global leadership in AI. If you're a grad student with a brilliant idea but no access to a supercomputer, this resource is for you.
The NAIRR is essentially a massive, federally coordinated resource pool. It will offer three main things: computational power (cloud, hybrid, and on-premises access), large-scale datasets (including an AI open data commons), and specialized AI testbeds for benchmarking. The bill makes clear who gets to use it: U.S.-based researchers, educators, and students affiliated with universities, non-profits, certain government agencies, or small businesses that have received federal funding. If you run a small manufacturing shop looking to use AI to optimize your supply chain, and you've got an SBIR grant, you're likely in.
Crucially, the bill requires the NAIRR to maintain a free access tier, supported by federal funding, even though the Operating Entity running the resource can establish a variable fee schedule for other users. In other words, cost shouldn't be a barrier for the most promising research or educational projects.
This isn't a government-run computer lab. The bill sets up a layered oversight structure. The National Science Foundation (NSF) will house a new Program Management Office (PMO) to handle the day-to-day administration. However, the actual operation of the NAIRR—maintaining the user portal, managing access, and hiring staff—will be delegated to an external, non-governmental "Operating Entity" selected through a competitive process. This structure is designed to leverage private-sector efficiency while maintaining public oversight.
The high-level strategy is managed by a new NAIRR Steering Subcommittee, chaired by the Director of the Office of Science and Technology Policy (OSTP). This subcommittee approves the operating plan, reviews the budget, and sets key performance indicators (KPIs) to measure success annually. This separation of duties—strategy at the top, management in the middle, and operation by a non-profit entity—is intended to keep the resource focused and nimble.
One of the most important provisions is the focus on AI safety and ethics. The PMO must establish requirements and review processes for applications concerning privacy, ethics, security, and trustworthiness. Even better, the bill mandates that when granting access to computational power, the NAIRR must prioritize projects focused on these areas, ensuring a significant percentage of annual resources goes toward them. This is the bill saying, “We want great AI, but we need safe, trustworthy AI first.”
Furthermore, security requirements must align with NIST’s Cybersecurity Framework, and the Operating Entity must designate a research security point of contact to comply with federal research security policies. For researchers, this means your project application will face scrutiny not just on its scientific merit, but also on how it handles data privacy and security.
While this bill is a huge step toward broadening access, a few governance details are worth noting. The PMO must establish advisory committees to guide the resource, drawing members from industry, academia, and public interest groups. However, the bill exempts these advisory committees from the standard federal advisory committee transparency rules (chapter 10 of title 5, U.S. Code). That means their discussions won't be subject to the usual public scrutiny, a small reduction in transparency for a resource that is otherwise very open.
The bill is also very clear on who is excluded: no individual employed by or acting on behalf of a foreign country listed in section 4872(d)(2) of title 10, U.S. Code, can be an eligible user. This is a direct measure to protect national security and prevent certain adversarial nations from benefiting from this new taxpayer-funded resource.