Last year, the UK government released a preliminary white paper on AI, detailing its plans for implementing a pro-innovation approach to AI regulation. The government's formal response, originally scheduled for late 2023, has been postponed and is now expected in 2024. The forthcoming response is projected to set out further proposals around AI regulation but is unlikely to trigger any significant regulation.
Meanwhile, the UK’s Intellectual Property Office has faced criticism after failing to agree upon a proposed code that would protect copyright holders from AI.
The UK has so far opted for a light-touch approach to the regulation of AI. In a March 2023 White Paper presented to Parliament, the Secretary of State for Science, Innovation and Technology set out the intention to avoid “heavy-handed legislation which could stifle innovation” and to take an adaptable approach to regulating AI.
As part of the forthcoming regulatory framework, the UK government has engaged in consultations and called for input from the AI sector to shape its approach to regulating AI activities. It plans to release overarching guidance and a preliminary regulatory framework to industry-specific regulatory bodies in the coming year.
These bodies will subsequently offer bespoke guidance for the finance, healthcare, competition, and employment sectors. The UK government will then determine whether dedicated AI legislation or a dedicated regulatory body is needed. This decision will guide the operational strategies of enterprises deploying AI systems in the UK in 2024.
Reuters reported that the UK’s Intellectual Property Office has scrapped its proposed code on protecting copyright holders from AI after failing to reach an agreement with a group of industry executives, including the BBC, the British Library and the Financial Times, and tech companies Microsoft, DeepMind and Stability AI.
According to the report, the government agency was unable to agree on a voluntary code of practice after consulting with AI companies and rights holders to produce guidance on text and data mining.
Industry bodies from the music, media, and publishing industries have called for greater protection from AI, amid fears that AI will copy work without compensating the original authors.
So far, the UK’s approach to regulating AI has been cautious, with a focus on gathering feedback from industry bodies rather than driving forward meaningful legislation.
The UK government’s soft-touch, pro-innovation take on AI regulation has raised concerns that regulators in the UK, and indeed elsewhere, will struggle to keep pace with the fast-developing AI industry if they do not move quickly.
A House of Lords committee criticized the government’s lack of action regarding protection rights for creators, with Lady Stowell, the committee’s Conservative chair, remarking:
“The government needs to be clear whether copyright law provides sufficient protections to rights holders because of the introduction of LLMs. If the government is clear that the legislative framework is not adequate then it should update that legislative framework.”
However, many are hopeful that the tests proposed by the UK government’s newly formed AI Safety Institute will spark a pragmatic conversation on AI safety. The specific tests are set to be published in March as part of the government’s wider white paper on the regulation of AI in Britain.
In December 2023, the EU reached a provisional agreement on the AI Act, becoming the first major world power to advance comprehensive legislation governing AI. The final text of the AI Act still needs to be formally adopted by the EU but is expected to be finalized in 2024.
Other world powers are also moving to address the need for AI regulation. The International Association of Privacy Professionals unveiled a compilation of AI-related legislation from across the globe in August 2023. Following that, in October 2023, the United Nations introduced an AI advisory board dedicated to forging global consensus on the oversight of AI technologies. This board anticipates issuing its definitive recommendations by the middle of 2024, potentially shaping regulatory actions worldwide.
At the end of 2023, delegates from the EU, US, UK, China, and 25 additional nations endorsed the Bletchley Declaration. This endorsement reflects the consensus among various national and international entities on the critical need for reliable AI and the risks associated with AI models. The declaration advocates for global collaboration and an inclusive international discourse, acknowledging the diversity of regulatory needs and perspectives across different nations.