
Facebook Criticized by Australian Tycoon Andrew Forrest Over Deepfake Crypto Scam

Samantha Dunn
Last Updated February 4, 2024 10:53 AM

Key Takeaways

  • Australian business tycoon Andrew Forrest is the latest victim of a deepfake video circulating on Facebook.
  • Cybersecurity firm Cybertrace uncovered the deepfake video, which promoted a fraudulent crypto scheme called Quantum AI.
  • Cybersecurity experts have warned investors to be vigilant after identifying deepfakes circulating on the internet.

A deepfake video featuring Australian mining tycoon Andrew Forrest endorsing a bogus crypto platform called Quantum AI has surfaced on Facebook, sparking widespread concern.

This incident underscores the escalating sophistication of digital fraud, prompting an urgent call for increased vigilance among internet users.

Sophisticated Scam Alert

On January 31, 2024, cybersecurity firm Cybertrace uncovered a deepfake video on Facebook featuring Andrew “Twiggy” Forrest, the Australian mining magnate, falsely promoting a cryptocurrency scheme called Quantum AI.

The video shows an AI-generated Forrest touting dubious trading software, leveraging his high profile and falsely promising massive daily returns to mislead viewers into joining the scam. In response, Cybertrace has urged internet users to exercise increased vigilance.

Cybertrace has flagged Facebook as notorious for scams, and this is not the first time Forrest has been targeted on the platform. In 2022, Forrest initiated criminal proceedings against Facebook, alleging the platform breached money laundering laws by failing to remove his image from scam cryptocurrency adverts.

Following the circulation of deepfake videos using his image, the Australian business tycoon responded in a statement widely circulated by local media:

“It is reprehensible that Facebook – a company valued at more than $USD 1 trillion – make a deliberate business decision to harm Australians by refusing to spend the software engineering dollars needed to upgrade their systems to detect these AI ads”.

The Prevalence of Deepfake Videos

Deepfake videos have circulated on the internet for some time, but in the last few years they have become much more sophisticated and convincing. Taylor Swift was a recent victim: a sexually explicit deepfake video of her was posted on X before being taken down by the platform. Although the offending account was removed, images from the video continued to spread across the social network, forcing X to take the blunt approach of blocking search terms related to the singer.

Deepfake videos use AI software to reproduce an almost identical likeness of a person, often so convincingly that viewers cannot tell whether the footage is real or AI-generated. Public figures have been exploited in online scams before; in 2021, Elon Musk was the victim of a scam on X, previously Twitter. Deepfakes are much more insidious, however, because the technology can convincingly replicate individuals’ identities.

Former Vice President Al Gore has spoken about the insidious nature of social media and deepfakes, saying in an interview that he believes social media algorithms should be illegal and comparing them to the digital equivalent of “assault rifles”.

What Regulators Are Doing About Deepfakes

In the US, new legislation that specifically targets deepfakes has been proposed. The Preventing Deepfakes of Intimate Images Act would prohibit the non-consensual disclosure of digitally altered intimate images. The legislation would both make the sharing of such images a criminal offense and create a private right of action allowing victims to seek relief, serving as a powerful deterrent.

Rep. Joe Morelle (D-NY) is spearheading the legislation. The Congressman introduced H.R. 3106, which aims to criminalize deepfake pornography, and has called for federal action, highlighting the emotional impact of such content. Francesca Mani, a victim of a deepfake incident, has advocated for stronger laws, stressing the power of speaking out against injustice. Morelle said:

“Try to imagine the horror of receiving intimate images looking exactly like you—or your daughter, or your wife, or your sister—and you can’t prove it’s not. Deepfake pornography is sexual exploitation, it’s abusive, and I’m astounded it is not already a federal crime. My legislation will finally make this dangerous practice illegal and hold perpetrators accountable. I’m grateful we have a generation of young women like Francesca ready to stand up against systemic oppression and stand in their power.”

Social media platforms have faced growing scrutiny over their role in protecting users from harmful AI-generated content. TikTok recently faced backlash for allowing fake AI accounts on its platform, with the organization ParentsTogether claiming that unlabeled AI accounts were damaging young people’s body image.
