Meta is deploying facial recognition technology to fight the infestation of celebrity deepfake scam advertisements on Facebook and Instagram.
The move comes as the social media giant battles criticism over the rising number of AI-powered deepfakes on its platform. However, experts remain skeptical about the safety of the firm’s development of facial recognition technology.
Beginning in December 2024, Meta’s new trial will monitor ads featuring the likenesses of around 50,000 celebrities to detect advertisements impersonating them.
When the technology flags an advertisement featuring one of these celebrities, it will compare the face in the video or image against the celebrity’s Facebook and Instagram profile pictures.
If the faces match and the advertisement is determined to be a scam, Meta will delete it.
Meta’s motivation is to boost the speed at which it can tackle the growing number of AI-powered scams on its platforms.
Monika Bickert, Meta’s VP of content policy, wrote in a blog post that early testing with a small group of celebrities and public figures had shown promising results in increasing the speed at which it could detect and enforce against deepfake scams.
Experts remain skeptical about Meta’s facial technology, rooted in concerns surrounding users’ privacy and security.
In an independent project designed to show how easily bad actors can misuse facial recognition, two Harvard students turned Meta’s smart glasses into a tool for undercover monitoring.
Anh Phu Nguyen and Caine Ardayfio used a pair of Ray-Ban Meta smart glasses and public databases to identify passersby in real-time.
The two students could find out individuals’ names, phone numbers, and addresses through a linked AI program and public databases.
According to research from Insight Partners, the global facial recognition market is forecasted to reach $12.67 billion by 2028, up from $5.01 billion in 2021.
Keiichi Nakata, a Professor of Social Informatics at Henley Business School, told CCN that the way facial recognition data is collected and stored remains a major issue.
“Facial recognition technology uses personal data that cannot be altered – compared to PIN numbers for personal identification that can be more easily changed,” Nakata said.
“Ethical concerns include how the data is collected, managed, and stored, and how these are used – for example, are they used in the way that is acceptable to users and in a responsible manner?”
The social media giant faces increasing pressure from lawmakers to tackle the rising number of scams on its platforms. High-profile celebrities such as Brad Pitt, Cristiano Ronaldo, and Taylor Swift have appeared in fabricated endorsements across Facebook, Instagram, and Messenger.
In 2021, Meta announced it would significantly scale back its use of facial recognition technology due to mounting criticism and concerns over privacy and ethical issues.
“There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate,” Meta said in a statement.
The decision came after years of backlash from privacy advocates and lawmakers over the potential for misuse of the technology.