On Tuesday, March 4, Meta confirmed that its facial recognition technology for spotting scam celebrity adverts on Facebook and Instagram will launch in the U.K. and the EU.
Although Meta worked successfully with regulators to clear the feature, its rollout has highlighted Britain’s continued concerns surrounding widespread surveillance.
First tested in 2023, Meta’s facial recognition technology identifies scams involving well-known figures.
Once identified, the firm’s facial recognition tools compare the imagery in the advertisement against real photos from the celebrity’s official profiles.
Meta said the technology combats the rising issue of celebrity deepfakes, where advertisements make it look like a celebrity is endorsing a product.
“We’re constantly working on new ways to keep people safe while keeping bad actors out, and the measures we’re rolling out this week utilize facial recognition technology to help us crack down on fake celebrity scams,” said David Agranovich, Meta’s Director of Global Threat Disruption.
The Facebook owner said it first encrypts the data and immediately deletes it after making the match. This ensures no facial recognition data is stored beyond its immediate use.
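The workflow described above — compare a flagged advert’s face against a celebrity’s official profile photos, then immediately discard the biometric data — can be sketched roughly as below. This is an illustrative sketch only, not Meta’s implementation: the function names, the use of cosine similarity over face embeddings, and the threshold value are all assumptions standing in for whatever proprietary pipeline Meta actually runs.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_likely_impersonation(ad_embedding, profile_embeddings, threshold=0.8):
    """Hypothetical check: does the face in a flagged ad match any
    embedding derived from the celebrity's official profile photos?

    The finally-block clears both inputs as soon as the comparison is
    done, mirroring the no-retention claim in the article.
    """
    try:
        return any(cosine_similarity(ad_embedding, ref) >= threshold
                   for ref in profile_embeddings)
    finally:
        # Discard the biometric data immediately after making the match
        ad_embedding.clear()
        profile_embeddings.clear()
```

In a real system the embeddings would come from a trained face-recognition model and the data would also be encrypted in transit, per Meta’s description; the sketch only captures the compare-then-delete shape of the process.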
China’s approach to surveillance has evolved into one of the most comprehensive and technologically advanced systems globally.
An estimated 700 million cameras are installed in the country, many of which are fitted with sophisticated facial recognition technology.
A mix of internet surveillance, data analysis and extensive facial capture means Chinese individuals are constantly being watched.
Surveillance data feeds into government platforms that utilize AI to analyze and predict potential threats, leading to actions such as detentions based on behavioral patterns.
Critics within the U.K. and U.S. have raised concerns about the adoption of this style of surveillance in the West.
In 2023, the U.K. government was accused of using King Charles’ Coronation to stage the country’s largest-ever facial recognition operation.
The event, which brought hundreds of thousands of people onto London’s streets, was captured by a new suite of Chinese-made Hikvision AI cameras.
These high-tech cameras, which are widely used across Britain, can scan tens of thousands of people’s faces at a time.
“The use of such surveillance technology means that hundreds of thousands of people were not only part of a once-in-a-generation, historic event but part of a high-tech police line-up,” Big Brother Watch Director Silkie Carlo wrote in The Times.
“Some are reassured that the police are not, yet, using the AI technology to record your identity if you’re not of interest to them, and that the highly sensitive biometric data the algorithm takes from your face is soon discarded unless you’re flagged,” Carlo added.
“The notion that if one has ‘nothing to hide’ one has ‘nothing to fear’ from intrusion has never fit in a free country,” Carlo concluded.
In 2023, amid rising tensions between China and the West, the U.K. announced it was removing Hikvision cameras and other Chinese-made surveillance equipment from sensitive areas.
The government defines ‘sensitive’ sites as locations that routinely handle secret material, house officials with high-level security clearances, or are frequently used by ministers.
The U.K. has experienced a series of new developments surrounding facial recognition that have raised further concerns from privacy advocates.
Last week, the government proposed a bill that would give law enforcement access to every driver’s data held by the Driver and Vehicle Licensing Agency.
This follows a previous Conservative government bill that proposed allowing police to use facial recognition technology to run images of burglars and shoplifters caught on camera against other databases.
Madeleine Stone, the senior advocacy officer at Big Brother Watch, a non-profit that campaigns against the use of facial recognition in the U.K., said the bill would put innocent citizens at risk.
“It’s disturbing to see the Government is reheating the Conservatives’ abandoned plans that most threaten privacy rights, including granting all police forces access to our driving license photos, opening the door to the creation of a massive facial recognition database,” Stone told The Telegraph.
“Not only would this be an unprecedented breach of privacy, but would also put innocent citizens at risk of misidentifications and injustice,” she added.
In 2024, the Metropolitan Police escalated its use of live facial recognition (LFR) systems. LFR was reportedly deployed 117 times between January and August, a substantial increase from 32 deployments over the previous four years.
The surge saw 770,966 faces scanned, resulting in more than 360 arrests.
Despite rising criticism, a 2023 YouGov poll found that 57% of British adults backed law enforcement using live facial recognition technology in public spaces, with only 28% opposing it.
Social media platforms will also soon be urged by the British communications watchdog to deploy “highly accurate” facial recognition checks to stop underage users from accessing their platforms.
In December, Ofcom announced it would recommend that social media companies use facial age estimation technology to determine a user’s age, in guidance due to be published in April.
Ofcom’s head of online safety policy, Jon Higham, told The Telegraph that it “doesn’t take a genius to work out that children are going to lie about their age.”
“We will expect the technology to be highly accurate and effective. We’re not going to let people use poor or substandard mechanisms to verify kids’ ages.
“The sort of thing that we might look to in that space is some of this facial age estimation technology that we see companies bringing in now, which we think is really pretty good at determining who is a child and who is an adult.”
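Because age estimators are imperfect, platforms that use them typically apply a safety margin: only clearly adult estimates pass automatically, while borderline cases fall back to a stronger check. The sketch below is a hypothetical illustration of that pattern — the function name, the margin value, and the three outcomes are assumptions, not anything specified by Ofcom.

```python
def age_gate(estimated_age, margin=5.0, adult_age=18):
    """Illustrative age-estimation gate (hypothetical, not Ofcom's spec).

    Estimators have error bars, so a common design passes only users
    estimated well above the adult threshold, blocks those estimated
    to be children, and routes borderline cases to another method
    such as an ID check.
    """
    if estimated_age >= adult_age + margin:
        return "pass"    # confidently adult: allow access
    if estimated_age < adult_age:
        return "block"   # estimated to be a child: deny access
    return "verify"      # borderline: require a stronger age check
```

The margin is the key design choice: widening it reduces the chance a child slips through at the cost of sending more adults to the fallback check.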