Key Takeaways
Face swap techniques have reportedly become the go-to choice for those seeking to create deceptive and misleading media, with a staggering 704% growth over the past year. The technology allows users to seamlessly replace one person’s face with another in videos and photos, making it increasingly difficult to distinguish real content from fake.
The findings come from iProov, a UK-based provider of biometric verification and authentication solutions, which attributed the rise to the increased availability of generative artificial intelligence (AI) tools.
Face swaps have taken the world by storm. With the advent of sophisticated AI and editing software, it has become easier than ever to create convincing deepfakes. The trend is not confined to entertainment; it has implications for privacy, security, and the integrity of digital content.
Just recently, Taylor Swift fell victim to deepfake technology when her likeness was superimposed onto videos in which she never participated. As a result, X, formerly Twitter, blocked searches for the singer after sexually explicit AI-generated images of Swift went viral on the platform.
Another illustrative example is Morgan Freeman, an iconic actor whose distinctive voice makes him a popular target for deepfake technology. In 2022, a video of “Morgan Freeman” telling people to question reality went viral on Twitter and sparked a broad discussion about the ethical implications of using someone’s likeness without their consent.
These recent examples demonstrate the impressive advancements in AI and machine learning techniques, but they also raise concerns about the potential for misinformation. Commenting on the matter, Jesse McGraw, an ethical hacker and public speaker, told CCN:
“My team and I have been investigating AI-based identity fraud, where threat actors are abusing AI cloning technology to impersonate a victim’s voice for malicious purposes. The good news is there are free tools available for analyzing and flagging potential voice cloning abuse. Similarly, AI technology is also being abused to produce deepfake images for the purpose of creating revenge porn or other embarrassing or sensitive deepfake media.”
“This is trending, and I am predicting a significant rise in AI identity abuse. This is because the tools to produce the content are readily available to the public. This has already created a challenge to industry makers who produce privacy and authentication technology because security standards will likely rise to meet the challenge by presenting some form of biometric authentication standard,” McGraw added.
McGraw’s statement aligns with iProov’s conclusion that “on-premise biometric solutions deployed just weeks ago risk becoming obsolete the moment a threat actor or vector is successful. This victory will be quickly shared via their communities, and within hours, a system could fall victim to multiple well-targeted attacks.”
Notably, there has been a surge in the number of threat groups exchanging information about attacks on biometric and video identification systems; 47% of the identified groups were established in 2023, according to the report.
Earlier this month, Democratic Minority Leader Vic Miller and Republican Rep. Pat Proctor proposed a bill aimed at stopping people from using AI to create and disseminate deepfakes of candidates and public officials in political advertisements.
Last summer, Kansas City established related guidelines designed to help mitigate risks associated with AI. The legislation reportedly received support from businesses, with Yacine Jernite, Machine Learning & Society Lead at tech company Hugging Face, saying:
“Beyond its direct impact on the use of AI technology by the Federal Government, this will also have far-reaching consequences by fostering more shared knowledge and development of necessary tools and good practices. We support the Act and look forward to the further opportunities it will bring to build AI technology more responsibly and collaboratively.”
As AI technology advances, the introduction of ethical guidelines and robust detection methods becomes increasingly important to combat potential AI misuse.