A fake sexually explicit video featuring podcast host Bobbi Althoff has become a viral sensation on the social media platform X, challenging the platform’s ability to enforce its own rules against such content.
Despite X’s stated policies, the video of Althoff remained accessible for nearly a day, with new posts continuing to emerge.
Bobbi Althoff, host of the popular The Really Good Podcast, is the latest victim of an AI-generated deepfake video circulating on X (formerly Twitter).
The video went viral after being shared by an X account and swiftly spread across social media. Althoff responded on Instagram:
“Hate to disappoint you all, but the reason I’m trending is 100% not me & is definitely AI generated.”
In follow-up stories, Althoff remarked:
“I felt like it was a mistake or something, that it was bots or something. I didn’t realize that it was people actually believing that that was me until my whole team called me and were like, ‘Is this real?’”
Taylor Swift was the victim of a similar deepfake earlier this month. The images remained live on the platform for 19 hours before the account that shared them was suspended. The US singer has yet to comment publicly.
Deepfake pornography depicting non-consenting adults has surged in recent years, as increasingly advanced AI capabilities make such videos easier to create.
Industry leaders have signed an open letter urging heightened regulatory measures to combat deepfake threats. With over 800 signatures, the letter calls for urgent government intervention.
Signatories include presidential candidates, tech CEOs, and even the so-called “AI godfather” Yoshua Bengio.
The rapid proliferation of content across social media makes detecting and removing deepfake posts, and terminating the accounts that share them, particularly difficult. The ability for anyone to comment on and share posts compounds the problem: the engagement that fuels virality can itself motivate individuals to keep spreading these high-engagement posts.
CCN reached out to X for comment but has not received a response.
The non-consensual nature of these videos is exacerbated by the difficulty in identifying what is real and what is AI-generated. These concerns have been highlighted recently in political discourse, with calls from officials and politicians to regulate this area following fears that deepfakes could be used to manipulate voters.
OpenAI made a landmark decision when it announced a ban on the use of its AI to build applications for political campaigning and lobbying.
While non-consensual deepfake pornography appears to violate X’s non-consensual nudity policy, many have criticized the platform for responding slowly, and often ineffectively, to such videos.
X’s policy does not explicitly mention deepfake pornography; however, these videos fall under its prohibition on non-permitted images and videos:
“Under this policy, you can’t post or share explicit images or videos that were taken, appear to have been taken or that were shared without the consent of the people involved.”
The rapid spread of the Althoff deepfake underscores the difficulty social media platforms face in detecting and removing deepfake content, and the need for increased cybersecurity and AI detection to protect social media users from AI manipulation.