
Jenna Ortega AI Deepfake Attack: Meta Incapable of Stopping Doctored Images?

Last Updated March 6, 2024 2:28 PM
James Morales

Key Takeaways

  • Jenna Ortega is the latest celebrity victim of online deepfakes.
  • An image of Ortega was used to promote Perky AI, an app that generates sexually explicit deepfakes.
  • The ad appeared on Facebook and Instagram, raising questions about Meta’s ability to police deepfake abuse.

Jenna Ortega has become the latest victim of an alarming rise in sexually explicit deepfakes after her image was used to promote Perky AI, an app that uses artificial intelligence to generate fake nude images.

The online ads appeared on Facebook and Instagram and were only withdrawn by Meta after NBC News alerted the company to their presence. Meta’s failure to prevent doctored images of Ortega from circulating on its advertising platform raises questions about its ability to police deepfake abuse across its social media networks.

Jenna Ortega Deepfakes Part of Larger Abusive Trend

While Ortega is hardly the first celebrity whose image has been used in sexually explicit deepfakes, the recent incident highlights the challenge online platforms face amid an unprecedented surge in the misuse of AI image and video generators.

According to one recent study, the total number of deepfake videos online increased by 550% between 2019 and 2023. Of these, 98% included sexually explicit material, with 94% of all deepfake pornography targeting women in the entertainment industry.

Are Deepfakes Illegal? Updating Laws for the Age of AI

In the US, nearly all states have criminalized the distribution of intimate images without consent. But existing “revenge porn” laws don’t necessarily cover AI-generated content.

So far, at least 10 states have passed legislation aimed at deepfake abuse. However, there is currently no federal law against disseminating such content. Considering that the image of Ortega used by Perky AI was taken when she was 16, child pornography laws could also come into play. 

In some cases, legislation such as the UK’s Online Safety Act could make Meta (or other companies in the same boat) criminally liable if courts deemed that it hadn’t done enough to prevent illicit deepfakes from circulating on its platforms.

However, some lawmakers in the country have called for the government to go even further and ban apps like Perky AI entirely.

Since the Jenna Ortega deepfake controversy was first reported, the ads in question have been pulled from Facebook and Instagram. But the issue they highlight remains a concern.

CCN reached out to Meta to ask what it was doing to prevent such instances from occurring again. At the time of writing, however, the firm had not responded. 

What Can Social Media Platforms Do?

Unfortunately, there is no simple fix for the issue of AI deepfake abuse. 

Every major social media platform has policies in place prohibiting the dissemination of nonconsensual sexually explicit images and videos. There is no lack of rules, only a problem with enforcing the ones that already exist.

In a recent blog post, Meta’s head of global affairs Nick Clegg acknowledged that the company lacked the technology to completely stamp out malicious AI-generated videos.

Commenting on Meta’s new “Imagined by AI” label, he said the latest tool “represents the cutting edge of what’s technically possible right now. But it’s not yet possible to identify all AI-generated content.”

Although Clegg framed Meta’s efforts in the context of a surge in deepfakes designed to manipulate voters in the run-up to elections, the firm faces similar challenges in the prevention of AI-powered online abuse.
