Hickenlooper Proposes AI Auditing Standards, Calls for Protecting Consumer Data, Increasing Transparency

Feb 5, 2024

Hickenlooper laid out an Artificial Intelligence (AI) regulatory framework in keynote speech

BOULDER, COLO. – Today, U.S. Senator John Hickenlooper proposed AI auditing standards and laid out a framework on AI regulation to ensure transparency and user literacy, protect consumer data, and build an international coalition in his keynote speech at the Silicon Flatirons Flagship Conference.

“We can’t let an industry with so many unknowns and potential harms police itself,” said Senator Hickenlooper. “We need qualified third parties to effectively audit Generative AI systems and verify their claims of compliance with federal laws and regulations.”

While many AI companies are proactively and voluntarily conducting risk assessments, qualified third parties are needed to audit Generative AI systems and verify compliance with federal laws and regulations. Clear, defined auditing practices – like the financial audits companies undertake – for both classical AI systems and Generative AI systems encourage collaboration and build trust between industry and the public.

“We’re at a historic inflection point with AI and society. And we need to be asking certain questions. Do we want to live in a world where Generative AI potentially displaces thousands of workers? Do we want our human creativity undercut by a Large Language Model? What rights should people have if they are harmed by a company’s Generative AI system?” said Senator Hickenlooper. “These should be answered by each one of us. And Congress, elected by the American people, should pass laws to carry out these decisions. Not the for-profit AI companies themselves.”

“That’s why we need a new framework to regulate AI, one I’m going to call ‘Trust, but Verify: A Path to AI’s Promise.’”

Hickenlooper’s framework focuses on three areas:

  1. AI Transparency and Literacy
  • AI systems should be transparent about the data their models are trained on and the personal data the companies collect. Whether it is determining when consumers are seeing AI-generated images or when an AI system is making hiring decisions, consumers deserve to know.
  • As AI continues to shape our workforce, it’s crucial we reimagine how AI literacy skills are taught to consumers and workers to ensure hardworking Americans aren’t left behind and small businesses can continue to compete with large corporations.
  2. Data Privacy
  • Data is the essential building block of Generative AI, and Americans should be able to make informed decisions about the permissions they grant. A comprehensive national data privacy law will protect consumers, minimize the amount of unnecessary personal data collected and sold by private companies, and build consumer trust.
  3. International Coalitions
  • The U.S. should be the leader in developing international norms, agreements, and technical standards for AI so we can ensure they’re made with democratic values and individual freedom in mind. A global governance framework will allow American companies large and small to compete internationally under a single set of strong, consumer-oriented protections.

Hickenlooper’s AI framework and auditing proposal come after he chaired two hearings on AI last year. In October, he chaired a hearing of the Senate HELP Committee’s Subcommittee on Employment and Workplace Safety to explore how employers and workers are preparing for the widespread integration of AI into our workforce. In September, Hickenlooper chaired a hearing of the Senate Commerce Committee’s Subcommittee on Consumer Protection, Product Safety, and Data Security to discuss how to increase transparency in AI for consumers, identify uses of AI that are beneficial or “high-risk,” and evaluate the potential impact of policies designed to increase trustworthiness in this transformational technology.

Hickenlooper also recently introduced the bipartisan AI Research, Innovation, and Accountability Act (ARIA), which establishes several transparency measures for AI developers and deployers.

Full text of the speech below:

“I am going to talk a little about AI. As you have been discussing this weekend and in the past, it’s not all that new. We’ve had Siri for quite a while; you give commands to your home devices.

“These are all forms of AI, but there’s no doubt we’ve entered a new stage, a new life, a new iteration in the life of AI.

“The age of Generative AI is here and is putting the power of AI directly into the hands of everyday people and small businesses. That’s a revolution in itself.

“Generative AI will forever transform our economy and our daily lives. I am sure you saw that the World Economic Forum went into some detail and made assessments that AI could add trillions of dollars to the global economy in the near future.

“It’s not a given that AI will benefit everyone. Our workforce, as one example, could be enhanced significantly.

“A Stanford University study shows Generative AI holds the potential to increase worker productivity by more than 35% in the near term, and by that they’re talking 5-10 years!

“Who receives the gains from that increase in productivity? Is the value of increased productivity really just reserved for owners and investors, or is it shared with workers? Are there ways we can make sure workers share in the benefits of that productivity improvement? By letting workers share in it, we really strengthen the country; the benefits are shared more broadly.

“But that certainly isn’t going to happen on its own.

“If we’re not careful and don’t steer AI that way, it could actually end up displacing huge numbers of workers without taking into consideration what they will do next.

“That’s one instance of thousands of decisions being made today that are going to have consequences for generations.

“We’re at a historic inflection point with AI and society. And we need to be asking certain questions:

“Do we want to live in a world where Generative AI potentially displaces thousands of workers?

“Do we want our human creativity undercut by a Large Language Model?

“What rights should people have if they are harmed by a company’s Generative AI system?

“These should be answered by each one of us. And Congress, elected by the American people (and on occasion productive), should pass laws to carry out these decisions.

“Not the for-profit AI companies themselves.

“Look at social media companies today. We’ve largely let them regulate themselves.

“Sure, Congress has reacted here and there when the platforms have been especially egregious. But otherwise they’ve been pretty much left to their own devices.

“Families have suffered with loved ones killed by terrorists who were radicalized by endless YouTube recommendations.

“Young people who have taken their own lives after troubling content fed their worst thoughts, again and again. 

“Children – sometimes young children – exploited sexually on these platforms.

“And all the while, bills have been introduced in Congress and languished while the algorithms churn on, reshaping our reality.

“The biggest question we should be asking ourselves today is whether we want to recreate the social media self-policing tragedy with AI.

“And whether these AI companies should be shielded from legal liabilities if they aren’t doing enough to prevent the harms their systems could create.

“We’re already seeing Generative AI being used to create voice cloning scams targeting seniors and kids.

“In New Hampshire, a robocall campaign targeted voters with a deepfake of President Biden’s voice and discouraged them from voting at all.

“Taylor Swift’s likeness was recently used for nonconsensual deepfake pornography, which quickly spread across platforms like Twitter and stayed up for many hours before it was taken down.

“We need America to be a global leader in AI for the sake of our economy, quality of life, and indeed our national security.

“But we have to balance AI innovation with preserving consumer privacy and limiting potential harms.

“That’s why we need a new framework to regulate AI, one for now we’re calling ‘Trust, but Verify: A Path to AI’s Promise.’

“The framework has three areas AI regulation should focus on in the immediate term, the highest priorities:

1) Transparency & Literacy,

2) Data Privacy,

3) International Coalitions

“Addressing these three areas will ensure AI systems are more transparent about the data their models are trained on, how risks such as bias are mitigated, and how they keep our personal data secure.

“First: Transparency & Literacy. Transparency is key. We’ve seen social media companies leverage our data in ways we never imagined and certainly never consented to.

“And we’ve seen the harm that comes from that opacity, the murky lack of transparency.

“Consumers need to know if they are seeing AI-generated images in the news, or if an AI system is making hiring decisions about them.

“Of course, disclosure alone isn’t a silver bullet. But it’s the first of many steps we can take to promote transparency, protect artists, and mitigate misinformation.

“We’re not talking about tedious, tiny-lettered privacy disclosures. We’re talking about clear labels on images – readily identifiable.

“Literacy. We need to reimagine how AI literacy skills are being taught to consumers and workers.

“How will we make sure blue-collar workers who don’t have the time or perhaps the money to teach themselves are not left behind?

“How will we support the small business owners who want to integrate AI into their products and be able to compete with the bigger businesses?

“Second: Privacy. We have to recognize that the essential building block of Generative AI is data.

“Generative AI trains on truly massive datasets, like every news article ever published and millions and millions of songs and videos.

“What is the role of copywriters in this brave new world, and of copyrights themselves?

“Americans should know where this data comes from! And be able to make informed decisions about the permissions they grant.

“We’re talking about real consent on how our data is used to create a product – not just quick pop-up windows.

“More importantly, consumers should decide how much an AI system knows about their own personal information and likeness.

“Part of a solution to this has to be a comprehensive data privacy law that will minimize the amount of unnecessary personal data collected and sold by private companies.

“A national privacy law would make the Federal Trade Commission and State Attorneys General across the country function like the ‘cops on the beat’ working on behalf of consumers.

“Companies developing AI systems – from OpenAI to Google and Anthropic – have ample opportunity to protect people’s privacy when they train, test, and release their models.

“It’s also good business for them! Consumers will trust products that they feel confident are built with their safety, security, and privacy in mind.

“So, Congress needs to fulfill our long-standing promise to pass comprehensive federal [privacy] legislation that protects consumers and spurs innovation, hand-in-hand with global developers.

“Which brings us to the framework’s third area: International Coalitions. We live in a global age, and AI will never be contained at national borders.

“The U.S. should be the leader in developing international norms, agreements, and technical standards for AI so we can ensure they’re made with democratic values and individual freedom in mind. 

“Global events like last year’s UK AI Safety Summit and the G7’s Hiroshima AI Process will help build consensus around our shared vision for safe innovation.

“Here in Colorado, NIST has been the ‘tip of the spear’ for AI safety in the U.S. through its AI Risk Management Framework and the creation of an American AI Safety Institute.

“To continue our leadership on the global stage, it’s essential that Congress provides the necessary resources to NIST as it researches and aligns its technical standards with the international community.

“We can’t let AI companies be the only ones who really understand what they’ve created and the potential harms that could result.

“We can’t regulate what we don’t understand – a parallel to the old adage, ‘you can’t manage what you can’t measure.’

“A global governance framework will allow American companies large and small to compete internationally under a single set of strong, consumer-oriented protections.

“Mitigating harms from bad actors also relies on consistent and strong governance.

“Scammers will find the product with the weakest safeguards and exploit it, without caring about where it was built.

“The U.S. leads the world in innovation by encouraging free, fair, and open competition. We should bring our strategy of accountable innovation to our international partnerships.

“A level playing field globally will let the best ideas grow and thrive – and we can feel confident those ideas will generally come from the United States.

“Even before the EU AI Act or the Biden Administration Executive Order on AI, companies using AI have had to comply with existing laws that preserve consumer protection, civil rights, and our health and financial data privacy. 

“Today, in the absence of U.S. laws for AI, many companies are proactively and voluntarily conducting risk assessments to test their systems to prevent bias.

“But we can’t let an industry with so many unknowns and potential harms police itself.

“We need clear rules that we can rely on to prevent AI’s harms.

“And while those exact rules are still being discussed, what we do know is that in the long-term we cannot rely on self-reporting alone from AI companies on compliance.

“We should build trust, but verify.

“We need qualified third parties to effectively audit Generative AI systems and verify their claims of compliance with federal laws and regulations.

“How we get there starts with establishing criteria and a path to certification for third-party auditors.

“Let’s remember, auditing practices aren’t new: financial audits, IT audits, and general performance audits have existed for years.

“We don’t just take your word that you’re paying your taxes, we audit to make sure. Some would have us audit less, some more.

“A clear baseline for AI auditing standards can also prevent a race-to-the-bottom scenario, where companies just hire the cheapest third-party auditors to check off requirements.

“The inherent risks with Generative AI mean we cannot wait to have guardrails in place.

“If we miss this opportunity the consequences will shape generations to come. What begins today as Generative AI may one day become Artificial General Intelligence.

“A wild, unregulated AI industry – that is accountable to no one – developing Artificial General Intelligence should scare us all into action.

“On Friday, the EU released the compromise text, as I mentioned before, of the Artificial Intelligence Act. They had unanimous agreement; even the skeptics, like France, Germany, and Italy, signed on.

“There’s important work for all of us ahead. It’s going to take all of us, including Silicon Flatirons, to establish and maintain American leadership in responsible AI.”

###