Key Takeaways
The UK, EU, US, and Israel signed the first international treaty on AI on Thursday, Sept. 5, in a move that aims to protect human rights.
Meanwhile, the EU’s AI Act seeks to balance oversight of the risks posed by large language models with promoting innovation across the continent.
The emergence of different regulatory frameworks for AI reflects the technology’s complexity and the varied concerns it raises.
Under the legally binding agreement, the signatory states have agreed to implement safeguards against the threats AI poses to human rights, democracy, and the rule of law.
The treaty, which focuses on the entire lifecycle of AI systems, was created by the Council of Europe, an international human rights organization.
AI systems must comply with principles that include protecting personal data, preventing discrimination, promoting safe development, and maintaining human dignity.
The treaty aims to “fill any legal gaps that may result from rapid technological advances.”
Governments will likely introduce safeguards to curb AI-powered misinformation and to address bias in the data used to train AI systems.
The UK government said that once the treaty is ratified and brought into effect, “existing laws and measures will be enhanced.”
Damien Duff, principal AI consultant at Daemon, told CCN that the treaty “represents a landmark step in the global governance of AI.”
“As the treaty is set to enter into force, it will challenge businesses to scrutinize their AI practices more closely, ensuring they align with these newly established international standards and balance innovation with responsibility, ultimately contributing to a future where AI benefits all of society,” he added.
The EU AI Act is a comprehensive legal framework for the development, commercialization, and use of AI within the EU.
It was one of the first major legislative efforts worldwide to regulate AI and is part of the EU’s broader strategy to ensure that emerging technologies are safe and ethical.
The rules, which were initially proposed in 2021, aim to safeguard citizens against potential risks associated with AI, while simultaneously promoting innovation across the continent.
The Act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.
There are several key differences between the international treaty and the EU AI Act.
The international AI treaty focuses squarely on AI’s impact on human rights and democracy, while the EU AI Act addresses a broader range of risks to people. Non-EU countries are also being urged to sign the international AI treaty.
The treaty also covers the use of AI systems in both the public and private sectors, while the EU AI Act focuses on privately developed large language models and AI systems.
Katharina Zugel, policy manager at the Forum on Information and Democracy, told CCN: “The EU AI Act offers much more detailed requirements according to the risk category of the AI system and it does apply to the private sector.
“The private sector has been left out of the AI treaty, the Forum thus encourages signatory countries to apply the treaty also to the private sector.”
Zugel said the convention itself will not effectively protect citizens’ human rights.
“It is its implementation, with appropriate governance, funding, resources, oversight from research, CSOs and journalists which can have an impact,” she added.
Despite the number of AI frameworks emerging worldwide, most are designed to work in conjunction with one another.
The Council of Europe said the treaty is intended to close the legal gaps created by rapid technological advances. These advances, seen in every facet of AI, have sparked a global race between countries and blocs to regulate the emerging tech.
Regional agreements, such as the EU AI Act, are tailored to specific regulatory environments and tend to be narrower in scope but stricter in their requirements, while international treaties aim for broader, cross-border consensus.
Each jurisdiction needs to address AI within its own legal context; the international AI treaty hopes to bridge the gaps between them.