
Following Taylor Swift Deepfakes, Hickenlooper Demands Social Media Companies Respond to User Privacy, Harassment Complaints

Feb 1, 2024

Pressure comes after X took 17 hours to take down violent AI-generated images of Taylor Swift, shining a light on countless users who endure digital harassment without sufficient recourse

WASHINGTON – Today, U.S. Senator John Hickenlooper, Chair of the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security, sent a letter to the CEOs of X and Meta calling on their social media platforms to respond quickly, compassionately, and thoroughly to complaints about digital content that poses serious risks to users’ personal privacy, mental health, or physical safety. 

“Americans demand, but rightly question your level of commitment to help adults and children going through a moment of crisis online,” Hickenlooper wrote in the letter. “But what happens to those people who do not have Taylor Swift’s reputation and reach? How can we ensure that every American is given the same care and attention when they demand their images are taken down by platforms?”

Hickenlooper continued: “Today’s model of self-policing for online platforms is not enough to avoid putting people’s children, teenagers, and loved ones at risk. Failure to address deeply troubling and personal violations of a person’s bodily autonomy or personal reputation can have devastating – even life-threatening – consequences. This is an unfettered epidemic that impacts everyday Americans across the nation and cannot continue.”

X and Meta’s prior attempts to moderate content and protect minors from high-risk material have exposed gaps in their platforms that have tragically affected users and fueled growing disappointment. Hickenlooper called on these social media companies to treat such cases with the utmost urgency, providing timely responses, investigations, and resolutions to users’ grievances.

The letter comes after an alarming incident last week when AI-generated pornographic images of pop superstar and 12-time Grammy Award winner Taylor Swift went viral on X, receiving more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks while they were shared across the internet. These nonconsensual, misogynistic AI-generated images remained on the platform for 17 hours before they were removed and the user was suspended. While the incident drew national attention, a growing number of ordinary people are suffering devastating emotional and societal harm because social media platforms fail to respond to their complaints sufficiently, urgently, or appropriately. 

Hickenlooper is a cosponsor of the bipartisan Kids Online Safety Act, which will protect children from harmful content online and hold social media companies accountable. Hickenlooper is a co-author of the bipartisan Artificial Intelligence Research, Innovation, and Accountability Act, which will develop standards to identify AI-generated content and create accountability for companies developing generative AI systems.

For the full text of the letter, see HERE or below:

Mr. Zuckerberg and Ms. Yaccarino,

Once an individual’s innocence is lost, it cannot be repaired or regained. The same goes for our sense of safety and privacy once they are violated. As we painfully heard through the shocking testimony in the Senate this week, individuals, particularly our children, too often suffer because the safeguards intended to protect them online are too weak. Today’s model of self-policing for online platforms is not enough to avoid putting people’s children, teenagers, and loved ones at risk. Failure to address deeply troubling and personal violations of a person’s bodily autonomy or personal reputation can have devastating – even life-threatening – consequences. This is an unfettered epidemic that impacts everyday Americans across the nation and cannot continue.

Last week, Artificial Intelligence (AI)-generated images of pop artist Taylor Swift were circulated widely online. These images were nonconsensual, pornographic, and quickly gained virality on the platform X before spreading to Meta-owned Facebook and Instagram, and other platforms across the internet. The Verge reported that one image in particular “attracted more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks” on X before the user account that shared the image was suspended. It took 17 hours before this harmful content was ultimately removed. This incident drew national attention, particularly from Taylor Swift fans who flooded the social media site with hashtags to drown out the fake images and push the platform to take immediate action.

But what happened to Taylor Swift is not a one-off: it has drawn national attention to a plague that is harming women and children in our country and will only get worse if left unchecked. As generative AI tools become increasingly powerful and easy to use, more people will be capable of producing this harmful media. Unfortunately, your platforms appear ill-prepared in the face of more attempts by bad actors to spread fake and disturbing content. From celebrities to young people, deepfake pornography causes devastating emotional and societal harms long after these posts are taken down from your platforms. This was a violent, misogynistic image that should never have been made or circulated. In this case, because of Swift’s celebrity and status, platforms responded relatively quickly to take it down. But what happens to those people who do not have Taylor Swift’s reputation and reach? How can we ensure that every American is given the same care and attention when they demand their images are taken down by platforms?

How many lives can be affected in 17 hours? For a high-profile individual like Taylor Swift, remediation took one day. Imagine if this took years.

There is a growing record of public cases where serious safety risks were not addressed appropriately. A 17-year-old actress reported fake pornographic images depicting her to X, but the content remains on the platform more than a month later. In the case of Doe v. Twitter, the family of a 16-year-old boy reported that sexual images of him from age 13 had resurfaced online. It was only once the Department of Homeland Security stepped in that the problematic videos of him were removed. Everyone should feel heard when they raise a complaint with a platform, and complaints involving minors’ safety must be met with a much higher sense of urgency.

Americans demand, but rightly question your level of commitment to help adults and children going through a moment of crisis online. When you receive a report from a minor or their family about a serious risk to their personal privacy, mental health, or physical safety, we expect you to respond quickly and compassionately. The responses should include what steps you are taking to address their concerns and when you will complete the work. After a thorough investigation, you should inform the person who filed the report about what action you took, and how they can escalate their case if they do not feel the response was sufficient. And this should all happen quickly: lives and reputations are on the line. Sadly, this is not common practice. It is well past time for Congress to compel your companies to protect the people on your platforms.

These are common-sense steps you can take to ensure that the worst cases of abuse, harassment, and bullying do not continue to plague our nation. We are disappointed that your prior investments in trust and safety have left these gaps for the public, the media, and Congress to identify for you.

Please provide us with detailed responses to the following questions:

  1. How long, on average, does it take your platform to respond to an individual’s request to remove a deepfake or non-consensual explicit image portraying them? Is this outreach automated, conducted by a trust & safety professional, or managed by another entity?
  2. What near-term steps will your platform take to respond to complaints about harmful deepfakes or non-consensual, explicit images that could damage an individual’s reputation?
  3. How does your platform provide transparency into the investigation and case resolution process?
  4. What options are available to somebody who requests removal of a deepfake or non-consensual explicit image portraying them if they believe their case was not properly resolved?  

We hope your attention to this matter will ensure you can live up to your mission to connect people and provide meaningful, safe, and positive experiences online.

Sincerely, 

 ###
