
Fighting Deepfakes Entirely With AI ‘Not Feasible’ for Businesses—ThreatLocker CEO Danny Jenkins

By Kurt Robson
Edited by Samantha Dunn
Key Takeaways
  • Danny Jenkins says that businesses need to deploy various techniques to combat the growing threat of AI deepfakes.
  • Jenkins believes that the financial services, healthcare and gambling sectors are most at risk from AI threats.
  • In 2024, the National Cyber Security Centre warned that AI would heighten the impact of cyberattacks over the next two years.

AI-powered tools are now being used for everything from speeding up the process of generating phishing emails to creating deepfakes that convince victims they are in a relationship with a celebrity.

Danny Jenkins, the CEO of cybersecurity firm ThreatLocker, believes that although businesses can utilize AI tools to fight back against the rising threat, this could create ethical concerns.

With a background as an ethical hacker, Jenkins understands the importance of businesses using various methods to protect their customers against sophisticated AI.

AI Threats Rising

Jenkins believes that AI will continue to grow in sophistication in 2025, reaching the point where it can easily create undetectable malware and highly convincing phishing emails.

“AI can now simulate personal communication styles, making targeted phishing even more dangerous,” Jenkins told CCN. “This evolution of AI-powered attacks will increase the complexity and impact of cyber threats.”

This prediction is backed up by the National Cyber Security Centre, which warned last year that AI would increase the volume and heighten the impact of cyberattacks over the next two years.

Jenkins predicts that the financial services, healthcare and gambling sectors are most at risk due to their high profitability and the sensitive nature of the data they handle.

“Healthcare, in particular, faces challenges due to underfunded IT security, while manufacturing industries often fail to prioritize cybersecurity, making them attractive targets,” Jenkins added.

AI Deepfakes Are Growing

Jenkins, who is regarded as an authority on network security, notes that AI-generated deepfakes, particularly in voice phishing, pose a growing threat to businesses and individuals.

In September 2023, hospitality company MGM Resorts International experienced a significant cyberattack that disrupted its operations and compromised sensitive data.

The attackers utilized AI-powered voice phishing to impersonate MGM employees. After gathering publicly available information from platforms like LinkedIn, they contacted MGM’s IT help desk, posing as legitimate staff members.

This manipulation led to the attackers obtaining credentials that granted them unauthorized access to MGM’s network.

“While AI deepfakes may not become widespread due to the effort required to pull them off, they could still have a significant impact on large companies,” Jenkins said.

Jenkins explained that businesses must implement double-checking procedures to protect themselves from the technology’s growing sophistication.

Voice verification systems or multifactor authentication could help businesses avoid falling victim to this worrying form of social engineering.

AI vs. AI Deepfakes

In order to combat AI threats, many businesses utilize AI tools themselves to fight back. AI-powered tools can analyze vast amounts of data in real time, detecting patterns and anomalies indicative of cyberattacks, including those involving AI.
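The anomaly detection described above can be illustrated with a deliberately simplified z-score filter. Real defensive tools use far more sophisticated models over network telemetry; this toy version only shows the underlying idea of flagging data points that deviate sharply from a baseline.

```python
from statistics import mean, stdev


def flag_anomalies(samples: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of samples whose z-score exceeds the threshold.

    Illustrative only: a stand-in for the ML-based detectors vendors deploy.
    """
    mu, sigma = mean(samples), stdev(samples)
    # Guard against a zero standard deviation (all samples identical).
    return [i for i, x in enumerate(samples) if sigma and abs(x - mu) / sigma > threshold]
```

For example, a sudden spike in login attempts from one account would stand out against an otherwise steady baseline and be flagged for review.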

However, Jenkins believes that AI's role as a double-edged sword, serving attackers and defenders alike, could begin to create ethical concerns.

“While attackers use AI for generative purposes, such as creating malware or phishing content, defenders may face challenges using AI without compromising privacy or security,” Jenkins said.

The deployment of AI often requires access to large volumes of sensitive data, such as personal information, network activity and user behavior patterns.

This could create tension for businesses, as they must balance the need for comprehensive data analysis with the obligation to respect customers’ privacy rights.

“A major ethical issue will be ensuring that cybersecurity vendors do not share customer data with AI engines that could inadvertently learn and misuse that information,” Jenkins added.

AI-powered Defenses

Despite these ethical challenges, AI-powered defenses can work well in combating AI-led attacks and may be needed as the threats grow larger.

However, these defenses are costly and require masses of computing power.

“Technologies are emerging that can detect AI-generated content, but it’s still not feasible to implement these in real-time for all communications,” Jenkins said.

Instead, Jenkins believes that companies should utilize manual offline checks first and foremost. For example, if someone calls claiming to be an employee, companies should call back on a known number to verify their identity.

“Companies should focus on manual checks and strengthening authentication procedures, particularly for sensitive actions like financial transactions,” Jenkins added.
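The callback procedure Jenkins recommends can be sketched as a simple rule: never call back on a number the caller supplies; always use the one on file. The directory names and phone numbers below are invented for illustration.

```python
# Hypothetical directory of verified contact numbers (entries invented).
DIRECTORY = {
    "j.smith": "+1-555-0100",
    "a.jones": "+1-555-0101",
}


def verify_request(claimed_user: str, caller_supplied_number: str) -> str:
    """Decide how to handle a caller claiming to be an employee.

    The number the caller supplies is deliberately ignored: a voice-phishing
    attacker controls that number, but not the one in the internal directory.
    """
    on_file = DIRECTORY.get(claimed_user)
    if on_file is None:
        return "escalate"  # No record of this identity: do not proceed.
    return f"call back {on_file}"
```

The key design choice is that the caller-supplied number never influences the outcome, which is exactly what defeats the MGM-style help-desk impersonation described earlier.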

Zero Trust Is Key

Jenkins believes businesses must operate with a zero-trust approach at all times.

“Businesses are realizing that traditional detection methods are no longer sufficient to counter increasingly sophisticated threats,” Jenkins explained.

“In response, more companies are adopting stricter security controls, such as two-factor authentication (2FA) and the zero-trust approach, which restricts application access and ensures that only verified users and software are allowed to operate within their systems,” he added.

Zero-trust assumes that threats can originate from anywhere—inside or outside an organization’s network—and therefore eliminates implicit trust in any user, device or application attempting to access resources.

Instead, it enforces strict access controls, continuous authentication and monitoring to protect sensitive systems. “The shift to more stringent controls is becoming essential in the face of evolving threats,” Jenkins said.
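The application restriction Jenkins describes is often implemented as default-deny allowlisting: software runs only if its cryptographic hash is explicitly approved. The sketch below is a minimal illustration of that idea, with invented hash entries; it is not a description of ThreatLocker's actual product.

```python
import hashlib

# Hypothetical allowlist of approved application hashes (contents invented).
ALLOWED_SHA256 = {
    hashlib.sha256(b"approved-binary-v1").hexdigest(),
}


def may_execute(binary: bytes) -> bool:
    """Default-deny check: run only software whose SHA-256 hash is approved.

    Anything not on the list is blocked, matching the zero-trust premise
    that no application is implicitly trusted.
    """
    return hashlib.sha256(binary).hexdigest() in ALLOWED_SHA256
```

Because the decision is "deny unless listed" rather than "allow unless flagged," novel or AI-generated malware is blocked by default even when no signature exists for it yet.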


Kurt Robson

Kurt Robson is a London-based reporter at CCN with a diverse background across several prominent news outlets. Having transitioned into the world of technology journalism several years ago, Kurt has developed a keen fascination with all things AI. Kurt’s reporting blends a passion for innovation with a commitment to delivering insightful, accurate and engaging stories on the cutting edge of technology.