Legislation would create targeted guardrails for high-risk AI applications
WASHINGTON – Today, U.S. Senator John Hickenlooper joined Senate Commerce, Science, and Transportation Committee colleagues John Thune, Amy Klobuchar, Roger Wicker, Shelley Moore Capito, and Ben Ray Luján to introduce the Artificial Intelligence Research, Innovation, and Accountability Act. The bipartisan legislation establishes a framework to bolster innovation while bringing greater transparency, accountability, and security to the development and operation of the highest-impact applications of AI.
“We’re entering a new era of Artificial Intelligence,” said Hickenlooper. “Development and innovation will depend on the guardrails we put in place. This is a commonsense framework that protects Americans without stifling our competitive edge in AI.”
“AI is a revolutionary technology that has the potential to improve health care, agriculture, logistics and supply chains, and countless other industries,” said Thune. “As this technology continues to evolve, we should identify some basic rules of the road that protect consumers, foster an environment in which innovators and entrepreneurs can thrive, and limit government intervention. This legislation would bolster the United States’ leadership and innovation in AI while also establishing common-sense safety and security guardrails for the highest-risk AI applications.”
Background on the legislation:
Content Provenance and Emergence Detection Standards: To provide clearer distinctions between human and AI-generated content, the bill would require the National Institute of Standards and Technology (NIST) to carry out research and develop standards for providing both authenticity and provenance information for online content.
Generative AI Transparency: The bill would require internet platforms to provide notice to users when the platform is using generative AI to create content the user sees. The U.S. Department of Commerce would have the authority to enforce this requirement.
NIST Recommendations to Agencies: NIST would be required to develop recommendations to agencies for technical, risk-based guardrails on “high-impact” AI systems, in consultation with other agencies and non-government stakeholders. The Office of Management and Budget would be tasked with interagency implementation of such recommendations.
Risk Management Assessment and Reporting: The bill would require companies deploying critical-impact AI systems, such as those operating critical infrastructure or conducting facial recognition, to perform detailed risk assessments. These reports would provide a comprehensive outline of how the organizations manage, mitigate, and understand risk. Deployers of “high-impact” AI systems making decisions that could impact an individual’s access to key services would be required to submit transparency reports to the Commerce Department.
Critical-Impact AI Certification: The bill would require critical-impact AI systems to be subject to a certification framework, in which critical-impact AI organizations would self-certify compliance with standards set out by the Commerce Department.
AI Consumer Education: The bill would require the Commerce Department to establish a working group to provide recommendations for the development of voluntary, industry-led consumer education efforts for AI systems.
View a one-pager on the bill here.
Support for the legislation:
“The Coalition for Content Provenance and Authenticity (C2PA) is encouraged to see mechanisms to address content provenance and authenticity in the bipartisan introduction of the AI Research, Innovation, and Accountability Act. With the proliferation of deceptive deepfakes and the increased availability of powerful creation and editing tools driven by generative AI, establishing the provenance of media is critical to ensure transparency, understanding, and trust in digital content online. We applaud Senator Capito, Senator Hickenlooper, Senator Klobuchar, Senator Lujan, Senator Thune and Senator Wicker’s leadership in introducing this bill. The C2PA will continue to update its open technical standard and provide an interoperable, opt-in provenance-based technology called Content Credentials, which functions like a nutrition label for content – allowing creators a way to show their work and consumers a way to see valuable context alongside the content they are consuming.”
View additional statements of support here.