
AI Cybersecurity Solutions: 71% of Central Banks Use GenAI to Detect Cyberattacks

James Morales
Published May 28, 2024 3:26 PM

Key Takeaways

  • A recent BIS survey found that 71% of central banks use generative AI tools for cybersecurity.
  • AI can help central banks identify and respond to cyber threats faster.
  • However, the technology also introduces new risks, with social engineering, zero-day attacks, and unauthorized data disclosure among the top concerns.

In recent years, artificial intelligence (AI) tools have become an essential weapon in the fight against cyber threats. From consumer anti-virus software to sophisticated, large-scale enterprise solutions, AI is increasingly central to efforts to detect and respond to cyberattacks.

As the field evolves, even central banks are getting in on the action. According to a recent report by the Bank for International Settlements (BIS), 71% of central banks are now using generative artificial intelligence (GenAI) in their cybersecurity stacks.

Risks and Rewards of Central Bank AI Adoption

The BIS report highlights that over two-thirds of surveyed central banks have integrated GenAI into their cybersecurity strategies, helping them automate routine tasks and identify anomalies and potential security breaches faster and more accurately than traditional methods alone.
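The report does not describe the banks' tooling, but the kind of automated anomaly detection it refers to can be illustrated with a brief, hypothetical sketch using the open-source scikit-learn library. The traffic features, values, and thresholds below are invented for illustration and do not reflect any central bank's actual systems.

```python
# Illustrative sketch only: a simple anomaly detector over hypothetical
# network-traffic features, loosely representative of the kind of automated
# screening the BIS report describes. Not a central bank implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per connection: bytes sent, bytes received,
# session duration (seconds), failed-login count.
normal_traffic = rng.normal(
    loc=[500, 800, 30, 0], scale=[100, 150, 10, 0.5], size=(1000, 4)
)
suspicious = np.array([[50_000, 200, 2, 12]])  # exfiltration-like transfer, many failed logins

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for samples that look normal.
print(model.predict(suspicious))          # expected: [-1]
print(model.predict(normal_traffic[:3]))  # expected: [1 1 1]
```

In practice, such models sit alongside rule-based monitoring and human analysts, flagging unusual activity for review rather than acting on it autonomously.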

However, adoption comes with challenges.

Over half of the surveyed central banks report that their strategies for evaluating and adopting AI are still under development.

Moreover, the report observes that institutions must balance the benefits of emerging AI solutions with the risks.

“In terms of risks, gen AI can introduce new vulnerabilities into central banks’ cyber security defenses,” it notes. The report added that “risks related to social engineering and zero-day attacks as well as unauthorized data disclosure are of highest concern.”

Human Oversight Still Needed

The BIS emphasized the challenges central banks face in providing the necessary resources to implement new AI systems.

Central banks reported that there are insufficient technological skills among staff and that they need to invest in additional human resources and training. This aligns with a broader shortage of cybersecurity experts that plagues many government agencies.

What’s more, despite AI’s effectiveness in handling operational tasks, it still requires human supervision to ensure ethical and accurate outcomes.

Compliance and Privacy Concerns

Given the sensitive and security-critical nature of their work, central banks face significant privacy and compliance challenges as they integrate AI cybersecurity tools.

Central banks handle vast amounts of sensitive financial data, and the deployment of AI systems raises concerns about how this data is collected, processed, and stored.

For instance, respondents to the BIS report flagged the risk of data leakage inherent in cloud services.

To mitigate this risk, central banks that have enabled, or plan to enable, staff access to cloud-based AI services have placed restrictions on employees' usage.
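The report does not spell out what those restrictions look like in practice. One plausible, purely hypothetical example is a pre-submission filter that blocks prompts containing sensitive identifiers before they reach an external cloud AI service; the patterns and policy below are invented for illustration.

```python
# Hypothetical sketch of a usage restriction: a simple pre-submission filter
# that blocks prompts containing sensitive-looking content before they are
# sent to an external cloud AI service. Patterns are illustrative only and
# are not drawn from any central bank's actual controls.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b[A-Z]{4}[A-Z]{2}[A-Z0-9]{2}(?:[A-Z0-9]{3})?\b"),  # SWIFT/BIC-like codes
    re.compile(r"\b\d{8,18}\b"),                                      # account-number-like digit runs
    re.compile(r"(?i)\bconfidential\b"),                              # classification markers
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any blocked pattern."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

if __name__ == "__main__":
    print(is_safe_to_send("Summarise this public press release."))               # True
    print(is_safe_to_send("Draft a reply about CONFIDENTIAL account 12345678"))  # False
```

Real-world controls would likely combine such filtering with access policies, logging, and contractual safeguards with cloud providers.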
