Gmail users have been warned to remain vigilant against a new AI-powered scam that uses “super realistic” phone calls and emails to gain full access to accounts.
The scam, experienced and reported by Microsoft solutions consultant Sam Mitrovic, highlights how AI-powered tactics increasingly prey on victims’ emotions.
Mitrovic said the AI scam began with a notification to approve a new Gmail account recovery attempt, which was quickly followed by a “super realistic” phone call.
Adding to its credibility, the call reportedly came from a phone number listed in official Google documentation online.
“The scams are getting increasingly sophisticated, more convincing and are deployed at ever larger scale,” Mitrovic wrote in a blog post.
“People are busy, and this scam sounded and looked legitimate enough that I would give them an A for their effort. Many people are likely to fall for it,” he added.
Anti-malware firm Malwarebytes reported that victims of the Gmail scam could also lose access to many other services and even suffer “identity theft.”
“None of the elements used in the attacks are novel, but the combination might make the campaign extremely effective,” the firm wrote.
AI scammers are becoming increasingly adept at turning victims’ emotional responses into vulnerabilities.
In the U.K., AI-powered voice cloning scams grew 30% in 2024, according to NatWest research.
AI-generated scams often mimic urgent communications from trusted sources, like an unexpected warning from Gmail or a distressed family member.
This urgency compels immediate action, which leaves little time for scrutiny and careful decision-making.
Graeme Stewart, head of public sector at Check Point Software, told CCN that people are falling for AI-powered scams because “they are incredibly convincing.”
“AI enables scammers to use real-sounding voices and interactions that no longer feel artificial,” Stewart said. “As a result, people are getting drawn into these conversations, as the responses feel increasingly lifelike—almost indistinguishable from human communication.”
“When it comes to avoiding these scams, the best approach is to treat every unexpected call with suspicion,” he added.
Spencer Starkey, Executive VP of EMEA at cybersecurity provider SonicWall, said everyone is vulnerable to cyberattacks.
“The sheer number of attacks the average customer can experience daily forces organizations of all sizes to automate detection solutions to identify and halt any attack before it enters the system,” he added.
In May, the FBI warned that AI scams against individuals and businesses were escalating to a higher level of sophistication.
AI significantly lowers the barriers to entry for cybercriminals through the automation of creation and distribution.
Instead of manually crafting each message, scammers can generate thousands of highly personalized scam messages in a fraction of the time.
FBI Special Agent in Charge Robert Tripp said: “As technology continues to evolve, so do cybercriminals’ tactics.
“Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike.
“These sophisticated tactics can result in devastating financial losses, reputational damage, and compromise of sensitive data.”