Auditors would verify risk management claims of AI companies and compliance with AI guardrails
WASHINGTON – U.S. Senators John Hickenlooper and Shelley Moore Capito reintroduced their bipartisan Validation and Evaluation for Trustworthy Artificial Intelligence (VET AI) Act. The bill directs the National Institute of Standards and Technology (NIST) to work with federal agencies and stakeholders across industry, academia, and civil society to develop detailed specifications, guidelines, and recommendations for third-party evaluators, who would work with AI companies to provide robust, independent external assurance and verification of how their AI systems are developed and tested.
“The horse is already out of the barn when it comes to AI. The U.S. should lead in setting sensible guardrails for AI to ensure these innovations are developed responsibly to benefit all Americans as they harness this rapidly growing technology,” said Hickenlooper, Ranking Member of the Senate Commerce Committee’s Subcommittee on Consumer Protection, Technology, and Data Privacy.
“The VET AI Act is a commonsense bill that will allow for a voluntary set of guidelines for AI, which will only help the development of systems that choose to adopt them. I was proud to join Senator Hickenlooper in reintroducing this legislation, and I look forward to getting this bill passed out of the Commerce Committee soon,” Capito said.
Currently, AI companies make claims about how they train their AI models, conduct safety red-team exercises, and carry out risk management, all without any independent verification. The VET AI Act would create a pathway for independent evaluators, serving a function similar to auditors in the financial industry and other sectors, to work with companies as neutral third parties to verify that their development, testing, and use of AI comply with established guardrails. As Congress moves to establish AI guardrails, evidence-based benchmarks to independently validate AI companies’ claims about safety testing will only become more essential.
The senators initially introduced the bill in the 118th Congress and helped pass it out of the Senate Commerce Committee last year. For the full bill text, click HERE.
Last year, Hickenlooper proposed a “Trust, but Verify Framework,” which included a call to establish auditing standards for AI systems in order to increase transparency and adoption of AI while protecting consumers. In the same speech, Hickenlooper also called for federal data privacy legislation to create a national standard for protecting Americans’ personal and sensitive data and encouraged collaboration with international partners to promote democratic values in AI.
Specifically, the VET AI Act would:
- Direct NIST, in coordination with the Department of Energy and National Science Foundation, to develop voluntary specifications and guidelines for developers and deployers of AI systems to conduct internal assurance and work with third parties on external assurance regarding the verification and red-teaming of AI systems.
- Such specifications require considerations for data privacy protections, mitigations against potential harms to individuals from an AI system, dataset quality, and governance and communications processes of a developer or deployer throughout the AI systems’ development lifecycles.
- Establish a collaborative Advisory Committee to review and recommend criteria for individuals or organizations seeking to obtain certification of their ability to conduct internal or external assurance for AI systems.
- Require NIST to conduct a study examining various aspects of the ecosystem of AI assurance, including the current capabilities and methodologies used, facilities or resources needed, and overall market demand for internal and external AI assurance.
“BPC Action applauds Sens. Shelley Moore Capito (R-WV) and John Hickenlooper (D-CO) for their thoughtful leadership on the VET Artificial Intelligence Act. This bill will spur AI development and deployment while also protecting Americans from privacy harms. Its commonsense public-private sector approach will help grow consumer trust in AI, ensuring more Americans and businesses can reap its benefits. At a time when AI is being used by countless Americans in nearly every sector of our economy, we’re excited to see Sens. Moore Capito and Hickenlooper work together to craft this timely bill,” said Michele Stockwell, president of Bipartisan Policy Center Action (BPC Action).
“The reintroduction of the Validation and Evaluation for Trustworthy (VET) AI Act in this Congress is a timely and important step toward establishing clear and consistent guidance for AI developers, deployers, and third-party evaluators,” said Daniel Correa, Chief Executive Officer of the Federation of American Scientists. “By outlining key processes like verification, red teaming, and compliance, the bill helps fill a critical gap as the U.S. shapes its broader AI governance strategy. The bill also reinforces the central role of NIST in driving innovative and equitable AI standards, and highlights the need to ensure the agency has the resources necessary to lead. We commend Senators Hickenlooper and Capito for their bipartisan leadership on this global policy issue and for advancing a vision of U.S. leadership in setting standards for AI safety and accountability.”
“The Institute of Internal Auditors (The IIA) is proud to support the bipartisan VET AI Act. As artificial intelligence and other emerging technologies transform the global economy, it is essential that Congress establish commonsense guardrails designed to protect American businesses and consumers. The VET AI Act takes an important step in this direction by recognizing, in part, the critical role of independent internal assurance in advancing organizational transparency and accountability. The IIA commends Senator Hickenlooper and Senator Capito for their leadership on this important issue and looks forward to continued engagement with congressional leaders to help advance the VET AI Act,” said Anthony J. Pugliese, CIA, CPA, CGMA, CITP, President and Chief Executive Officer, The Institute of Internal Auditors.
“To fully harness the enormous power of AI, it’s critical that we empower AI developers and deployers with stakeholder-informed, voluntary frameworks for assuring the resilience of their AI systems – to include cybersecurity. Palo Alto Networks firmly believes that AI adoption and security go hand-in-hand, and applauds Senators Hickenlooper and Capito for this common sense legislation,” said Daniel Kroese, Vice President, Public Policy & Government Affairs, Palo Alto Networks.
“Senator Hickenlooper’s and Senator Capito’s VET AI Act is a smart, balanced approach toward developing voluntary standards for third-party evaluators to assess the safety of AI systems, promoting innovation, adoption, and accountability,” said Karan Bhatia, VP of Google Government Affairs and Public Policy.
“Developing responsible AI requires flexible regulation. The VET AI Act takes important steps towards ensuring a multistakeholder process to reinforce privacy and reduce harms through voluntary consensus standards that will assist those who continue to innovate in AI deployment,” said Josh Landau, Senior Counsel for Innovation Policy at CCIA.
“As Americans increasingly rely on AI systems to make decisions in every aspect of daily life – from travel to banking to healthcare – many remain wary about who or what to trust when using these systems. IEEE-USA believes that the Validation and Evaluation for Trustworthy (VET) AI Act is a necessary and significant step towards addressing public concerns while encouraging adoption and innovation. Industry-led technical standards, such as those that IEEE develops, are valuable tools that enable third-party validation and verification of systems, which helps to minimize the risks that AI systems may pose when deployed. We are thankful that Senators Hickenlooper and Capito have introduced this legislation acknowledging the role that technical standards can play in ensuring that AI systems are demonstrably protecting American interests,” said Tim Lee, President, IEEE-USA.
“Booz Allen proudly supports the bipartisan ‘Validation and Evaluation for Trustworthy (VET) Artificial Intelligence Act.’ We welcome the bill’s focus on practical, use-case-based evaluation and third-party assurance, which helps build public confidence in AI systems and fosters innovation. By advancing voluntary, evidence-based guidelines, the VET AI Act reflects the kind of flexible framework needed to accelerate U.S. leadership in AI. Its emphasis on testing, validation, and external verification promotes transparency and trust, which are key drivers of widespread AI adoption. We are encouraged by the bill’s commitment to unlocking AI’s full potential while ensuring that systems are tested and trusted, not overregulated,” said John Larson, Executive Vice President, Head of Booz Allen’s AI Business.
“The Software & Information Industry Association (SIIA) proudly supports the bipartisan VET AI Act from Senators Hickenlooper and Capito. This bill lays essential groundwork for strengthening trust in AI while supporting continued U.S. leadership in AI and innovation. By directing NIST to work with government, industry, and civil society on clear standards for third-party evaluation, the VET AI Act helps ensure AI systems are aligned with public interest throughout their lifecycle, as these standards would factor in privacy protections, harm mitigation, dataset quality, and governance. We look forward to working with Congress to advance this important legislation,” said Paul Lekas, Head of Global Public Policy & Government Affairs and Senior Vice President of SIIA.
“It’s important that AI companies don’t just grade their own homework. The VET AI Act brings much-needed structure to how we verify safety claims in this fast-moving field. Clear, independent standards for review are going to be key to building public trust and accelerating adoption of AI, which in turn supports innovation. ARI applauds Senators Hickenlooper and Capito for their work to make AI more transparent and trustworthy,” said Brad Carson, President of Americans for Responsible Innovation.
“The Center for AI Policy (CAIP) strongly endorses the reintroduction of the Validation and Evaluation for Trustworthy (VET) AI Act championed by Senators Hickenlooper and Capito. While companies increasingly conduct model evaluations, these assessments remain opaque to outside stakeholders, who cannot verify their completeness, accuracy, or adequacy. By establishing standards and studying the evaluations sector, the VET AI Act promotes much-needed transparency. CAIP supports this pragmatic step towards ensuring that AI systems advance American competitiveness and security rather than introducing severe risks, and hopes to see this valuable legislation succeed in the new Congress,” said Jason Green-Lowe, Executive Director for the Center for AI Policy.
###