Key Takeaways
Following the first event in the UK last year, the next AI Summit will take place in Seoul on the 21st and 22nd of May. Co-hosted by the governments of the UK and South Korea, the event will build on the legacy of the Bletchley Park Summit, which sketched the beginnings of an international framework for AI regulation.
Attended by political leaders from around the world, the event series maps an emerging consensus on some issues. But it also highlights areas where the two host countries are at odds with some of their neighbors.
Following the first AI Summit, participant governments agreed to the Bletchley Declaration, committing them to identifying AI safety risks and collaborating on policy.
While that might sound as if everyone was on board with regulating the sector, neither the UK nor South Korea is in any rush to do so.
A recent Blueprint for Mutual Prosperity through AI Governance coauthored by Microsoft and the Korean Ministry of Science and ICT was full of phrases like “self-regulation” and “private sector-led.”
The UK government has also argued in favor of a light-touch approach, as outlined in its 2023 policy paper, A Pro-Innovation Approach to AI Regulation.
Both countries have strong incentives to pursue laissez-faire AI regulation: large technology sectors, and the opportunity to compete with neighboring economies that have already pushed ahead with stricter rules.
In the UK, the government’s preference for hands-off AI regulation is consistent with its post-Brexit emphasis on cutting red tape to give the country a competitive edge.
Across the Channel in the EU, where the AI Act was adopted in March, the Center for Data Innovation has forecast that the cost of compliance could rise to as much as €34 billion a year by 2030.
In a similar vein, South Korea could potentially benefit from undercutting China, one of the first countries to regulate the AI sector, where the regulatory burden is also high. For example, the autonomous driving startup PerceptIn needed a compliance budget of $25,000 a month to launch its autonomous micro-mobility project there.
In early 2023, the Korean government introduced its most comprehensive piece of AI legislation to date. However, opponents of the bill criticized it for focusing too much on fostering industry without enough emphasis on civil rights.
Yet again, an important parallel can be drawn with the UK, where trade unions have called on the government to do more to protect workers from AI risks.
Of course, no one wants to be anti-innovation and no one wants to stand against civil rights. But when it comes to AI regulation, the policies that are condemned by trade unions and civil society organizations are the very same ones that won Microsoft’s glowing praise.
In the end, debates over whether to pursue heavy-handed or light-touch regulation reflect the political mood of a nation. Ahead of the Seoul AI Summit, the UK and South Korea look to be broadly aligned on the issue of regulation. But South Korea’s recent parliamentary elections swung the balance of power in favor of the opposition, and a similar result is expected when Brits vote later this year. With left-wing political parties in the ascendant in both countries, their legislative direction could soon change.