The emergence of easy-to-use AI deepfake generators has created new forms of harassment and sexual abuse in schools. But so-called “nudify apps” aren’t solely responsible for the worrying trend. Distribution channels have also caught the attention of authorities looking to crack down on illegal deepfakes in schools, and Telegram has emerged as a popular platform for sharing sexually explicit AI-generated content.
The issue was highlighted in South Korea recently after a concerned high school student published a map showing schools and universities where students are suspected of participating in Telegram channels used to share explicit deepfake imagery.
Public outrage over deepfake pornography in South Korea was first ignited earlier this year by research showing that South Korean celebrities account for 53% of all individuals depicted in such content worldwide.
The report by Security Hero found that eight of the top 10 people most frequently depicted in deepfake pornography worldwide were South Korean singers. Consistent with earlier studies, it also observed that deepfake abuse almost exclusively targets women and girls, with 99% of AI-generated pornographic content featuring female subjects.
The issue gained mainstream media attention in August when the Hankyoreh newspaper reported the story of a university student’s quest for justice after her classmates used AI to produce sexually explicit content with her likeness. In May 2024, three years after she first discovered the crime, two perpetrators were arrested and charged with sexual crimes.
The victim’s statement to the court described the suffering the deepfakes have caused her, which has led to her being treated for post-traumatic stress disorder. Calling on the judge to impose “the most severe punishments available” for the crimes, she emphasized:
“We don’t exist to satisfy somebody’s sexual urges. We’re dignified human beings […] We must not condone [the perpetrators of deepfake abuse] because they undermine trust in our society and devastate the lives of their victims.”
In the wake of this and similar stories, the viral deepfake map alleges that over 500 schools and universities have students involved in Telegram groups dedicated to sharing such illegal content.
Amid the spiraling controversy, on Monday, Sept. 2, South Korean police initiated an investigation into Telegram over its role in the distribution of sexually explicit deepfake content.
While South Korea has witnessed an explosion of deepfake chatrooms on Telegram, it isn’t the only country where the platform is used to share deepfake pornography.
In 2020, the security firm Sensity reported on a Telegram bot that was used to generate over 100,000 non-consensual explicit images. The bot was found to be especially popular in Russia and Eastern Europe, which accounted for 70% of users.
Telegram’s privacy features, which include optional end-to-end encryption, allow users to hide their identities and make it harder for law enforcement to discover and shut down channels. Other apps offer a similar level of anonymity, but with 950 million monthly active users worldwide, Telegram is by far the most popular.
The company that operates the platform insists it works with law enforcement to shut down offending channels. But it has increasingly come under fire for failing to prevent the spread of illegal content.
French authorities indicted founder and CEO Pavel Durov over allegations that Telegram facilitated drug trafficking and the distribution of child sexual abuse material. The company was also accused of failing to comply with law enforcement requests for information.
Amid rising concerns about an epidemic of non-consensual sexually explicit deepfakes, lawmakers around the world have moved to crack down on the practice.
In South Korea, the government has pledged to pursue tougher enforcement and harsher sentences for offenders.
Meanwhile, politicians in the UK have debated banning nudification software. In the US, the Senate recently passed the DEFIANCE Act, which would allow victims of deepfake sexual abuse to sue creators, distributors and recipients.