
UK AI Regulators Underfunded, Committee Warns: £10m Government Support “Clearly Insufficient”

James Morales
Last Updated May 28, 2024 1:20 PM

Key Takeaways

  • The House of Commons Science and Technology Committee has published a report on AI regulation in the UK.
  • The Committee warns that regulators don’t currently have the resources needed to meet the challenge.
  • Responsibility for AI regulation in the UK mostly falls on Ofcom, the ICO, the CMA and the FCA.

The House of Commons Science and Technology Committee has published a report on the governance of artificial intelligence (AI).

One of the concerns raised in the report is that regulators lack the resources needed to take on the challenge, and that the £10 million the government has pledged to support them is "clearly insufficient."

Regulators Underfunded 

While the UK doesn’t have a dedicated AI regulator, the responsibility for ensuring AI safety largely falls on communications regulator Ofcom, which has a remit to protect people from online harms. 

Within their respective jurisdictions, the Information Commissioner's Office (ICO), the Competition and Markets Authority (CMA), and the Financial Conduct Authority (FCA) also have a role to play.

In February, as part of its initiative to develop "pro-innovation" AI regulation, the government committed £10 million "to jumpstart regulators' AI capabilities." But now, MPs have concluded that much more is needed:

“We believe that the announced £10 million to support regulators in responding to the growing prevalence of AI is clearly insufficient to meet the challenge, particularly when compared to even the UK-only revenues of leading AI developers.”

However, funding isn't the only area where regulators face difficulties.

Trouble Accessing Unreleased Models

Alongside underfunding, the Science and Technology Committee flagged issues the AI Safety Institute has had accessing foundation models ahead of their public release.

While the Institute is not a regulator itself, its inability to review models before their release suggests AI developers may inhibit effective oversight.

Although the report acknowledged the value of testing already-available models, it added that “the release of future models without the promised independent assessment would undermine the achievement of the Institute’s mission and its ability to secure public trust.”

Next Steps for AI Regulation in the UK

The Committee’s findings indicate that the existing AI governance structures in the UK are inadequate to cope with the rapid pace of AI development. 

A key focus of the report is the government’s AI White Paper published in March 2023. 

In its initial response to the White Paper, the Committee called for the government to announce a dedicated AI Bill in the November 2023 King's Speech. However, that request was rebuffed as the government sought more time to consider its options.

Then, in April 2024, reports surfaced suggesting that legislation was finally in the works. But after Prime Minister Rishi Sunak announced a snap general election last week, it seems unlikely that the government will be able to table anything before the vote.

Given that the opposition Labour Party is on course to win a significant majority in parliament, the Committee observed that “the next government should stand ready to introduce new AI-specific legislation.”

It also called for the next government to commit to presenting MPs with quarterly reviews of the efficacy of its current approach. These should include: “a summary of technological developments related to its stated criteria for triggering a decision to legislate, and an assessment whether these criteria have been met.”
