X, formerly Twitter, has blocked searches of singer Taylor Swift after sexually explicit AI-generated images of the star went viral on the platform.
The platform’s content moderation policies have been criticized after the AI images of Swift were viewed millions of times before removal.
All search terms for the multi-Grammy-winning singer were blocked over the weekend, in what has been described as a “blunt” moderation tactic. Searches for the singer returned the same message: “Something went wrong. Try reloading.” This drastic action follows the viral spread of sexually explicit AI-generated images of the star, which has raised concerns about the misuse of AI technology.
The decision to block searches came after the AI-generated images had already garnered millions of views, leading to criticism that X’s response was reactive rather than proactive. The platform’s attempts to control the situation have sparked a debate over the effectiveness and ethical implications of such heavy-handed moderation strategies.
X’s safety team released a statement on Friday addressing how they were dealing with the situation. However, following the rapid proliferation of these AI images across X, the social media platform made the decision to block searches of the US singer on Sunday.
Deepfake images of Taylor Swift were reportedly live on the social media platform for 19 hours before the account that shared the images was suspended.
While X’s response might provide temporary relief, it raises questions about the long-term strategy for managing such content. Joe Benarroch, head of business operations at X, stated, “This is a temporary action and done with an abundance of caution as we prioritize safety on this issue”.
A statement from SAG-AFTRA, the Screen Actors Guild – American Federation of Television and Radio Artists, addressed the AI incident:
“The sexually explicit, A.I.-generated images depicting Taylor Swift are upsetting, harmful, and deeply concerning. The development and dissemination of fake images — especially those of a lewd nature — without someone’s consent must be made illegal. As a society, we have it in our power to control these technologies, but we must act now before it is too late.”
The actions by Elon Musk’s X highlight the challenges social media platforms face in moderating content effectively, especially in the context of AI technologies. Critics argue that while the platform’s reactive response may mitigate immediate harm, it fails to address the underlying issues associated with AI-generated fake content.
Users on X have shared their views on what is allowed on the platform, with one user criticizing the platform for allowing pornographic content.
Celebrity image manipulation has been around for some time; however, deepfake technology blurs the lines between what is real and what has been artificially constructed.
Regulation isn’t being developed quickly enough to keep up with rapid AI advancements. By creating realistic but entirely fabricated images or videos, deepfakes have the potential to harm individuals’ reputations, manipulate public opinion, and erode trust in media.
The White House released an official statement addressing the dangers of deepfakes and affirming its commitment to tackling the problem of fake AI images.