
Central Banks Weigh AI Risks as Financial Services Adoption Booms

By James Morales

Key Takeaways

  • Adoption of AI tools, especially those powered by foundation models, has soared in the past two years.
  • As the technology gains traction, it is being integrated into ever-more critical data and processes.
  • This has sparked a conversation among central banks about the risks to the financial system.

In the last two years, financial institutions worldwide have enthusiastically embraced AI, incorporating automated tools to prevent fraud and money laundering, streamline customer service, and offer new features to consumers.

With adoption snowballing, the world’s central banks and financial regulators are moving quickly to assess and mitigate risks posed by the new technology.

Financial Sector AI Adoption

According to research by the Bank of England, 75% of firms surveyed are already using AI, up from 58% in 2022. Overall, it observed that foundation models account for 17% of all AI use cases—a dramatic change from a few years ago, when statistical tools dominated.

In the initial wave of adoption, institutions started with what Bud Financial COO George Dunning referred to in an interview as “low-hanging fruit.”

As he explained, automating low-risk processes that would otherwise require many human hours is an easy way to boost efficiency with AI.

Alongside operations, the financial sector has also adopted an increasingly sophisticated array of customer service bots. Initially, companies were reluctant to integrate financial or personal data, preferring to use AI systems to more efficiently direct and manage queries without exposing sensitive information.

Dunning anticipates the next wave of AI adoption will create more seamless, end-to-end automated experiences for different needs “as people look to effectively and safely use these tools on other data sources, such as transaction and investment data.”

However, this rising adoption hasn’t gone unnoticed by financial authorities.

Regulators Take Note

In May, the European Central Bank (ECB) published a report on the use of AI in the financial sector that highlighted some of the risks regulators are most concerned about.

Priority concerns for the ECB include cybersecurity risks, market concentration, and the growing reliance on a handful of AI providers that are fast becoming “too big to fail.”

Echoing the same sentiment, Federal Reserve Governor Michelle W. Bowman recently discussed AI adoption in banking, weighing in on whether the new technology requires new regulations to mitigate financial stability risks.

“While AI may be on the frontier of technology, it does not operate outside the existing legal and regulatory framework,” she noted.

She observed that banks are already subject to strict rules governing their use of data, cloud services, and third-party software.

Concentration Risk

One potential AI risk highlighted by the ECB and the Bank of England is the industry’s reliance on technology from a few foundation model developers.

Drawing on his own experience, Dunning said Anthropic’s core models seemed to be emerging as the “go-to” for the financial sector, and predicted that “a handful of key players” will ultimately become the dominant providers.

As long as AI companies continue to deliver secure, reliable technology, this concentration needn’t be a problem.

However, as the ECB report noted, overreliance on a limited number of suppliers may make the “operational backbone” of the financial system more vulnerable to failures or cyberattacks affecting one of the big providers.


James Morales

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.