The white paper details a strategy that primarily depends on utilizing current laws and regulatory bodies, along with “context-specific” guidance, to provide moderate oversight of the rapidly evolving AI industry.
The UK government is channeling £90 million into the establishment of advanced AI research hubs across the nation, focused on key fields such as healthcare, chemistry, and mathematics.
This substantial investment underscores the government’s commitment to nurturing innovation and its ambition to maintain a leading position in the global AI landscape.
Furthermore, the UK government is dedicating £19 million to support 21 specialized projects focused on the creation of AI tools that are both safe and trustworthy.
In recognition of the rapid pace of AI advancements, an additional £10 million is being allocated to enhance the capabilities of regulatory authorities such as Ofcom and the Competition and Markets Authority (CMA). This investment is aimed at equipping these regulatory bodies with the necessary expertise to navigate the complex landscape of AI, addressing both the challenges and the prospects it brings. These regulators are expected to outline their strategies for managing AI-related risks and opportunities by the end of April.
In the response, Technology Secretary Michelle Donelan highlighted the strengths of the UK’s nimble regulatory approach to AI, noting:
“The challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured.”
Furthermore, she hints at the possibility of implementing “further targeted binding requirements” to address the complexities brought about by “highly capable general-purpose AI systems.”
The aim is to hold the few dominant AI corporations accountable for ensuring their technologies are “sufficiently safe”. The introduction of such binding requirements is not immediate, as it would necessitate the enactment of new legislation.
Donelan added:
“As AI systems advance in capability and societal impact, it is clear that some mandatory measures will ultimately be required across all jurisdictions to address potential AI-related harms, ensure public safety, and let us realise the transformative opportunities that the technology offers. However, acting before we properly understand the risks and appropriate mitigations would harm our ability to benefit from technological progress while leaving us unable to adapt quickly to emerging risks. We are going to take our time to get this right — we will legislate when we are confident that it is the right thing to do.”
Prime Minister Rishi Sunak’s ease in tech circles is unmistakable, whether he’s interviewing Elon Musk on the entrepreneur’s platform, engaging with top US AI executives on their lobbying agendas, or hosting a “global AI safety summit”. This familiarity with tech industry leaders aligns with his government’s preference for a regulatory policy that, for the time being, steers clear of imposing stringent new regulations.
Sunak’s administration exhibits a different kind of urgency in its commitment to invigorating domestic AI innovation through substantial taxpayer investments. The Department for Science, Innovation and Technology (DSIT) implies that these investments will be judiciously allocated to promote “responsible” AI advancement, a term that awaits a clear definition in the absence of a comprehensive legal framework to delineate its parameters.
Tamara Quinn, IP and AI partner at Osborne Clarke, told CCN that anyone hoping for fireworks from the government could be left disappointed.
“After nine months cogitating about this game-changing technology, the government’s response could be viewed as underwhelming. Rather than fireworks, the focus in today’s announcement is on existing regulators applying their existing powers to tackle AI. It is unlikely that there would be sufficient time for new legislation in any case, with a general election expected later this year and priority already given to three major digital technology-focused bills (the Digital Markets, Competition and Consumer Bill, the Data Protection and Digital Information Bill, and the Media Bill).”
Quinn pointed out that even in the absence of new legislation, the rapid pace and adaptability of technology, especially in the AI sector, pose significant challenges for regulators, who must stretch their existing powers to keep up.
“As for general purpose AI, such as the systems powering the breathtaking advances in chatbot capability, it is interesting that again, no new law is planned, at least in the near term. The government almost certainly faces an election by the end of the year, and it is interesting to see that the Labour party is planning legislation in this area,” she concluded.
Major technology companies, including Microsoft, Google DeepMind, and Amazon, have voiced their endorsement of the UK government’s approach to AI regulation. These industry powerhouses acknowledge the critical importance of fostering responsible AI development and implementing effective regulatory measures to guarantee the safe and ethical deployment of AI technologies.
While the European Union has recently reached a consensus on the final text of its risk-based framework for regulating “trustworthy” AI, with its comprehensive tech regulations set to be implemented later this year, the UK is charting its own course by opting against immediate legislation on AI. The UK’s approach markedly diverges from the EU’s strategy, emphasizing sector-specific guidelines over prescribed legal frameworks, and the gap between the two regions will only widen as the EU’s AI law advances.
Alongside its regulatory efforts, the EU is also rolling out its own set of AI support initiatives. As these contrasting approaches unfold—sector-specific guidelines in the UK versus a comprehensive legal framework in the EU—the effectiveness of each in attracting and nurturing growth-driven AI innovation remains an open question.