
Scarlett Johansson’s ChatGPT Voice Scandal: OpenAI Admitting to Deepfake by Removing “Sky”?

Last Updated 5 hours ago
Samantha Dunn
Key Takeaways
  • As AI technology becomes more sophisticated, it becomes more difficult to discern deepfakes from reality.
  • Celebrities and high-profile figures are the main targets of AI technology misuse.
  • Scarlett Johansson is the latest celebrity target of a deepfake – the perpetrator? OpenAI.

Hollywood star Scarlett Johansson has expressed her shock and anger after OpenAI launched a chatbot with a voice that bore an “eerily similar” resemblance to her own.

The actress revealed that she had previously declined an offer from OpenAI to voice its new chatbot, designed to read text aloud to users.

Scarlett Johansson’s Audio Deepfake

The recent deepfake controversy erupted when OpenAI’s new chatbot voice, called Sky, debuted last week, prompting commentators to note the striking similarity between Sky and the voice Johansson performed in the 2013 film “Her.” In response to the backlash, OpenAI announced on Monday that it would remove the voice, though it insisted the resemblance was unintended.

Johansson, however, accused the company and its CEO, Sam Altman, of deliberately mimicking her voice. In a statement to the BBC, she detailed her reaction to the demo and Altman’s actions.

“When I heard the released demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine,” Johansson stated. She added that Altman had implied the similarity was intentional by tweeting the word “her,” referencing her role in “Her.”

The actress recounted how Altman had approached her in September, suggesting that her involvement could help bridge the gap between tech companies and creatives. Despite his assurances that her voice would comfort users, Johansson ultimately declined for personal reasons. Just two days before Sky’s release, Altman made a final appeal through her agent, which Johansson also rejected.

OpenAI Apologizes

In light of the controversy, Johansson has hired legal representation and sent two legal letters to OpenAI to determine how the voice was created. She emphasized the importance of addressing issues related to deepfakes and the protection of personal likenesses.

OpenAI’s response included a denial from Altman, who stated, “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

The company further clarified in a blog post that the voices used by its chatbot were sampled from voice actors it had partnered with. As the debate over digital consent and AI technology continues, this incident underscores the need for clear regulations to safeguard individuals’ identities in the digital age.

Taylor Swift, Bobbi Althoff, Piers Morgan

While Scarlett Johansson’s alleged audio deepfake centers on her voice, AI-generated videos can appear just as convincing.

Within a single week in October, high-profile individuals including actor Tom Hanks, journalist Gayle King, and YouTube sensation MrBeast found themselves the unwitting faces of deceptive marketing campaigns.

The growing number of deepfake scandals involving celebrities like Taylor Swift, Bobbi Althoff, and Piers Morgan has brought further attention to the issue of digital consent and the effectiveness of social media moderation.

Bobbi Althoff Falls Victim to Deepfake Video

Bobbi Althoff, the host of The Really Good Podcast, recently became the target of a deepfake video that depicted her in a sexually explicit manner, quickly spreading across the social media platform X.

Despite X’s policies against such content, the video remained accessible for nearly a day, highlighting the challenges platforms face in enforcing their own rules. Althoff took to Instagram to deny the authenticity of the video, emphasizing its AI-generated nature and expressing her shock at the realization that people believed it was real.

Taylor Swift’s Deepfake Incident Spurs Action on X

Following a similar incident involving Taylor Swift, where deepfake images of the singer circulated on X for 19 hours, the platform took the drastic step of blocking all search terms related to Swift. This action, described as a “blunt” moderation tactic, was intended to prevent further spread of the AI-generated images.

The move has been met with mixed reactions, with some criticizing it as reactive rather than proactive, and others questioning the effectiveness and ethical implications of such heavy-handed moderation strategies.

Piers Morgan and Oprah Winfrey Misrepresented in Deepfake Ads

Piers Morgan and Oprah Winfrey were also embroiled in deepfake controversies, with their likenesses being used without consent in advertisements for a controversial self-help course.

These manipulated videos, which appeared on platforms like YouTube, Facebook, and Instagram, falsely portrayed Morgan and Winfrey endorsing the course, showcasing the potential for deepfakes to mislead the public and infringe on the rights of individuals.

Beyond Celebrity Deepfakes

Celebrities and high-profile individuals are the most obvious targets for deepfakes; however, the ease with which this technology can be accessed and manipulated means that a broad spectrum of society is also at risk of AI misuse.

Social media is often where such deepfakes are shared. Bryan Pellegrino, co-founder and CEO of LayerZero Labs, informed his followers on Twitter that a deepfake of him was circulating.

The use of deepfakes and AI manipulation in politics also poses ethical issues for politicians and lawmakers.

A seasoned Democratic consultant was identified as the architect behind a controversial robocall campaign that mimicked the voice of President Joe Biden, stirring a nationwide conversation on the ethics of artificial intelligence in political campaigning.

In an attempt to strengthen cybersecurity measures, the House of Representatives’ Cybersecurity Office has banned staffers from using Microsoft Copilot and says it is working on a more secure version of the chatbot for government use.
