In May 2024, Hollywood star Scarlett Johansson expressed shock and anger after OpenAI launched a chatbot voice she described as “eerily similar” to her own.
The actress revealed that she had previously declined an offer from OpenAI to voice its new chatbot, designed to read text aloud to users.
Now, Johansson has once again lambasted OpenAI CEO Sam Altman, saying her experience was “so disturbing” and she was “so angry” after the company seemingly mimicked her voice for its ChatGPT system Sky.
The AI voice controversy erupted when OpenAI showcased its new voice, Sky, in a May 2024 demo. Commentators noted the striking similarity between Sky’s voice and Johansson’s performance as the AI assistant in the 2013 film Her. In response to the backlash, OpenAI announced that it would withdraw the voice, though it insisted the resemblance was unintended.
Johansson, however, accused the company and its CEO, Sam Altman, of deliberately mimicking her voice. At the time, she released a statement to the BBC detailing her reaction to the demo and to Altman’s actions.
“When I heard the released demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine,” Johansson stated. She added that Altman had implied the similarity was intentional by tweeting the single word “her,” a reference to the film.
In a July 2024 interview with the New York Times, Johansson described deepfakes as a “dark wormhole that you can never get yourself out of.” She touched on her fears for the future, noting:
“I think technologies move faster than our fragile human egos can process it, and you see the effects all over, especially with young people. This technology is coming like a thousand-foot wave.”
The actress recounted how Altman had first approached her in September 2023, suggesting that her involvement could help bridge the gap between tech companies and creatives. Despite his assurances that her voice would comfort users, Johansson declined for personal reasons. Just two days before Sky’s release, Altman made a final appeal through her agent, which she also rejected.
Following the controversy, Johansson hired legal counsel and sent two legal letters to OpenAI asking it to explain how the voice was created. She emphasized the importance of addressing deepfakes and protecting personal likenesses.
OpenAI’s response included a denial from Altman, who stated, “The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”
The company further clarified in a blog post that the voices used by its chatbot were sampled from voice actors it had partnered with. As the debate over digital consent and AI technology continues, this incident underscores the need for clear regulations to safeguard individuals’ identities in the digital age.
While the alleged deepfake of Scarlett Johansson centers on audio, AI-generated videos can appear just as convincing.
Within a single week in October 2023, high-profile individuals, including actor Tom Hanks, journalist Gayle King, and YouTube sensation MrBeast, found themselves the unwitting faces of deceptive marketing campaigns.
The growing number of deepfake scandals involving celebrities like Taylor Swift, Bobbi Althoff, and Piers Morgan has brought further attention to the issue of digital consent and the effectiveness of social media moderation.
Bobbi Althoff, the host of The Really Good Podcast, became the target of a sexually explicit deepfake video that quickly spread across the social media platform X.
Despite X’s policies against such content, the video remained accessible for nearly a day, highlighting the challenges platforms face in enforcing their own rules. Althoff took to Instagram to deny the video’s authenticity, stressing that it was AI-generated and expressing her shock that people believed it was real.
Following a similar incident involving Taylor Swift, where deepfake images of the singer circulated on X for 19 hours, the platform took the drastic step of blocking all search terms related to Swift. This action, described as a “blunt” moderation tactic, was intended to prevent further spread of the AI-generated images.
The move was met with mixed reactions: some criticized it as reactive rather than proactive, while others questioned the effectiveness and ethical implications of such heavy-handed moderation strategies.
Piers Morgan and Oprah Winfrey were also embroiled in deepfake controversies, with their likenesses being used without consent in advertisements for a controversial self-help course.
These manipulated videos, which appeared on platforms like YouTube, Facebook, and Instagram, falsely portrayed Morgan and Winfrey endorsing the course, showcasing the potential for deepfakes to mislead the public and infringe on the rights of individuals.
Celebrities and high-profile individuals are the most obvious targets for deepfakes. However, the ease with which this technology can be accessed and manipulated means that a broad spectrum of society is also at risk of AI misuse.
Social media is often where such deepfakes are shared. Bryan Pellegrino, co-founder and CEO of LayerZero Labs, informed his followers on Twitter that a deepfake of him was circulating.
A seasoned Democratic consultant was identified as the architect behind a controversial robocall campaign that mimicked the voice of President Joe Biden, stirring a nationwide conversation on the ethics of artificial intelligence in political campaigning.
In an attempt to strengthen cybersecurity measures, the House of Representatives’ Office of Cybersecurity has banned staffers from using Microsoft Copilot; Microsoft says it is working on a more secure version of the chatbot for government use.