The Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act establishes voluntary guidelines for testing and verifying AI systems to boost public trust and support NIST's risk management framework.
John Hickenlooper
Senator
CO
The Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act aims to foster trust and adoption of AI by establishing voluntary technical guidelines for testing and validating AI systems based on risk. This legislation directs the National Institute of Standards and Technology (NIST) to create these guidelines, which will cover safety, privacy, and transparency. Furthermore, the Act establishes an advisory committee to recommend qualifications for entities performing these AI assurance checks and mandates a study on the capacity of current assurance providers.
If you’ve ever had an automated system mess up your order or a chatbot give you wildly inaccurate information, you know the problem: we need to trust AI more, but we need the systems to earn that trust first. The Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act aims to tackle this by creating a national framework for checking AI systems.
At its core, the VET AI Act is about creating technical standards to make AI systems safer and more reliable. The Director of the National Institute of Standards and Technology (NIST) has one year to develop and publish a set of voluntary technical guidelines and specifications for AI assurance (SEC. 4). Think of this as the government creating a detailed checklist for how AI should be tested for quality, privacy, and safety. This isn't a mandate; it’s a blueprint that developers and companies can use to prove their AI is trustworthy. The goal is simple: standardized checks should lead to greater public confidence and encourage wider, safer adoption of AI.
These guidelines must cover key areas that affect everyday users, including how to protect consumer privacy, methods for reducing the harm an AI system might cause, and standards for data quality (SEC. 4). For example, if a company is using AI to screen job applications, the guidelines would describe how to test the system to confirm it isn't unfairly penalizing certain demographics, addressing potential negative societal impacts before the system goes live.
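As a concrete illustration of the kind of test such guidelines might standardize, here is a minimal Python sketch of a disparate-impact check for a hiring-screen AI. The sample data, the group labels, and the four-fifths (80%) threshold are illustrative assumptions; the bill itself does not prescribe any particular metric or test.

```python
# Illustrative sketch of a disparate-impact check for a hiring-screen AI.
# The applicant data and the four-fifths (80%) threshold are assumptions for
# demonstration; the VET AI Act does not specify any particular test.

from collections import defaultdict

def selection_rates(decisions):
    """Compute, for each group, the share of applicants the model advanced."""
    advanced = defaultdict(int)
    totals = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        advanced[group] += int(selected)
    return {g: advanced[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group's selection rate to the highest group's."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    # (demographic group, was the applicant advanced to interview?)
    sample = [("A", True), ("A", True), ("A", False), ("A", True),
              ("B", True), ("B", False), ("B", False), ("B", True)]
    ratio, rates = disparate_impact_ratio(sample)
    print(f"Selection rates: {rates}")
    print(f"Disparate impact ratio: {ratio:.2f}")
    # A common rule of thumb (the "four-fifths rule") flags ratios under 0.80.
    print("Flag for review" if ratio < 0.80 else "Within threshold")
```

A standardized guideline would go further than this toy check, specifying what data to collect, which metrics to report, and how results should be documented, but the basic idea is the same: run a defined test before deployment and flag systems that fall outside an agreed threshold.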
The bill defines two crucial roles for checking AI systems. The Developer is the entity that builds the system, and the Deployer is the entity that operates it (SEC. 3). Assurance can be internal (the company checks itself) or external (an independent, nonaffiliated third party checks it). This external assurance is where the rubber meets the road for accountability. To keep things honest, the third-party checker must be completely independent—meaning no corporate ownership ties or shared employees with the developer or deployer (SEC. 3).
To figure out who is qualified to perform these external checks, the Secretary of Commerce must establish an Artificial Intelligence Assurance Qualifications Advisory Committee within 90 days of the guidelines being published (SEC. 5). This committee, drawing experts from universities, industry, consumer groups, and labor unions, will spend a year studying existing certification methods and recommending the qualifications, licensing, and expertise AI auditors should have. This step is critical because it moves the industry toward professionalizing the role of the AI safety auditor, much as we already have for financial auditors and building inspectors.
Because the framework is voluntary, no developer is required to adopt it. It does, however, create a powerful incentive: companies that follow the VET guidelines can market their systems as having passed assurance against NIST-developed standards, a meaningful edge in a crowded market. That could be crucial for a small business looking to adopt a new inventory management AI: it can choose the system that has passed a rigorous external assurance check, reducing its risk of costly operational failures or privacy breaches.
To ensure the market can keep up, the Secretary must also conduct a study on the current capacity of the AI assurance market (SEC. 6). The study must determine whether there are enough qualified people and facilities, potentially including existing accredited labs, to actually perform these checks. If the guidelines are great but nobody can find or afford a qualified auditor, the whole system stalls. This proactive study ensures the government is looking at the practical challenges of implementation, recognizing that robust standards require a robust infrastructure to support them.