Senators argue for consumer protection
WASHINGTON – Today, U.S. Senators John Hickenlooper and John Thune sent a letter to Director Arati Prabhakar of the Office of Science and Technology Policy (OSTP) urging the White House to develop secure, transparent federal standards for artificial intelligence (AI).
“Given the rapid pace of generative AI development, consumers should be provided with accessible tools to authenticate online content (audio, video, text, image) and feel empowered to trust the AI systems they interact with,” the senators wrote. “We need to invest in research and build consensus around authentication techniques between federal agencies, academia, and industry.”
There are currently no comprehensive federal standards for identifying AI-generated content. In their letter, the senators emphasize the need for close federal and private sector collaboration on research and standards development to ensure future AI innovation is trustworthy and secure.
Specifically, the senators encourage OSTP to continue overseeing research into techniques that could help authenticate online content. They argue that developing open standards will be critical to protecting consumers.
In their letter, the senators pose a number of questions related to AI verification tools:
- What current or planned federal research or pilot initiatives will focus on advancing content provenance and certifying the authenticity of AI-generated works?
- What techniques are being explored to prevent watermarks or content authenticity tools from being removed, manipulated, or counterfeited?
- How will watermarking techniques differ for various types of AI-generated content (e.g., audio, video, text, image)?
Senator Hickenlooper has been a strong advocate for federal coordination on AI policy. Last week, Senator Hickenlooper sent a letter to Acting National Cyber Director Kemba Walden encouraging the White House to consider artificial intelligence’s potential threat to the nation’s cyber infrastructure.
This Congress, Hickenlooper chairs the Senate Commerce, Science, and Transportation Subcommittee on Consumer Protection, Product Safety, and Data Security, which has made research into AI one of its main priorities. Today, Hickenlooper will chair a subcommittee hearing on AI transparency and consumer trust.
In April, Subcommittee Chair Hickenlooper and Ranking Member Blackburn sent a letter to leading technology companies encouraging the implementation of the AI Risk Management Framework (AI RMF) published by the National Institute of Standards and Technology (NIST). In June, Hickenlooper met with the chair of the Federal Trade Commission to discuss proper guardrails for artificial intelligence.
Read the full letter HERE and below:
We are in a moment that is both exciting and concerning for consumers interacting with generative artificial intelligence (AI) technology. While this technology can spur innovation and enhance our creativity, there are growing questions about how to protect consumers from fraud, scams, and deception facilitated by generative AI. The Office of Science and Technology Policy (OSTP) plays a unique role in coordinating with and building consensus among federal agencies on critical research efforts. We seek to understand the state of federal efforts to develop secure, robust, consensus-based standards for the authentication of AI-generated works in order to promote trust and transparency for Americans.
New generative AI tools are now being used by consumers around the world. ChatGPT and Bard can streamline web searches through the use of AI chatbots, GitHub Copilot can assist developers in writing code more efficiently, and DALL-E 2 can generate photorealistic images that quickly go viral. The emergence of generative AI has left consumers concerned about the authenticity of the audio and visual content they engage with online. As generative AI technology continues to be integrated into new and existing applications, consumers would benefit from tools that can reliably discern between real and AI-generated content. The White House recently convened seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI), which made voluntary commitments to build trustworthy AI technology. Key among these commitments was the development of an AI provenance and watermarking system that would certify the source of and digitally tag AI-generated content. Other industry groups, such as the Coalition for Content Provenance and Authenticity (C2PA), are also participating in this effort by creating technical standards to verify the origin and history of a piece of digital content, such as an image, video, audio recording, electronic document, or AI-generated content.
Federal agencies also play a critical role in resolving issues related to generative AI. The National Institute of Standards and Technology (NIST) recently established a new Generative AI Public Working Group to understand the risks of generative AI models. Additional content authenticity and provenance research could be pursued through other means, including the establishment of a National Artificial Intelligence Research Resource (NAIRR), which would provide additional AI R&D opportunities for researchers across the U.S.
Given the rapid pace of generative AI development, consumers should be provided with accessible tools to authenticate online content (audio, video, text, image) and feel empowered to trust the AI systems they interact with. We need to invest in research and build consensus around authentication techniques between federal agencies, academia, and industry.
As public and private sectors explore ways to promote trustworthy and transparent AI systems, we would like to learn more about OSTP’s strategies to support the development of content authenticity tools for the age of generative AI:
1. What current or planned federal research or pilot initiatives will focus on advancing content provenance and certifying the authenticity of AI-generated works?
   a. Will such research or pilot initiatives include or consult with representatives from academia, civil society, and the private sector?
   b. Will such research or pilot initiatives seek to specifically mitigate certain types of potential harms to consumers?
2. What techniques are being explored to prevent watermarks or content authenticity tools from being removed, manipulated, or counterfeited?
3. How will watermarking techniques differ for various types of AI-generated content (e.g., audio, video, text, image)?
4. Will watermarking or content provenance systems need to be designed, adapted, or applied in a sector-specific manner (e.g., for educators, digital media creators, etc.)?
5. How is OSTP coordinating with other federal agencies (e.g., the FTC, the Department of Education, and the NTIA) to implement more consumer education campaigns around content authenticity and digital literacy?
We appreciate your attention to this important matter and look forward to continuing to work to ensure novel innovations in AI deliver positive benefits to society while mitigating risks.