
Mary Nightingale “Livid” After Her Likeness Used in Deepfake Scam

James Morales
Last Updated March 27, 2024 1:47 PM

Key Takeaways

  • An AI deepfake video shows the ITV News presenter Mary Nightingale endorsing a scam investment scheme.
  • Speaking out against the use of her likeness in such a way, Nightingale said it threatened to undermine her audience’s trust.
  • Even large businesses have been scammed by AI deepfakes, with one Hong Kong firm recently hit for $25 million.

ITV News journalist and presenter Mary Nightingale has become the latest public figure to speak out against the use of her likeness by deepfake scammers.

After an AI-generated video purporting to show her promoting a fraudulent investment app appeared on social media in February, Nightingale said the deepfake scam amounted to identity theft.

British Presenter Deepfaked for Social Media Scam

Commenting on the deepfake video recently, Nightingale described how it imitated her on-air appearance on ITV News:

“It was me sitting at my usual desk in my usual seat with the same camera shot as usual, recommending that people invest their money in this harebrained scheme.”

Continuing, she expressed concern that such videos could erode the public’s trust in news journalism.

“If people don’t trust you, you cannot do your job,” she proclaimed. “So for someone to take my image and my voice and manipulate it in that way, it made me absolutely livid.”

Generative AI Enhances Fake Endorsement Scams

While deepfake scams are a relatively new phenomenon, falsely claiming the endorsement of public figures to market dodgy investment opportunities is a well-established fraud tactic.

For instance, Apple co-founder Steve Wozniak is currently suing Google over YouTube videos that used his name and image to peddle a Bitcoin giveaway scam.

While those videos were less sophisticated than today’s AI-powered celebrity impersonations, the underlying tactic remains the same. But contemporary AI-generated video can be far more effective at allaying viewers’ suspicions, making such scams much harder to identify.

So how can investors protect themselves?

Mitigating Deepfake Fraud

With deepfake fraud on the rise, experts have recommended a number of measures businesses and individuals can take to mitigate risk.

In an interview with CCN, Kroll managing director Ken Joseph drew a parallel between consumer-facing scams like the one featuring Nightingale and high-level threats affecting even the largest businesses.

Referring to a recent case in which a Hong Kong firm was swindled out of $25 million by fraudsters who used AI to impersonate the company’s CFO, Joseph argued that precautionary measures share a theme across the spectrum of threats.

“As the threat becomes more sophisticated, so too must the controls and verification systems,” he remarked.

For businesses, that means ensuring security protocols are followed to the letter, even if instructions appear to originate internally. Technological solutions such as digital identities and multi-factor authentication can also play a role, he added.

Meanwhile, for consumers, nothing beats a healthy dose of skepticism. Or, as Nightingale put it, “everyone should always think twice about everything they see on social media.”
