
AI Law: Europe Is Taking A Long Time To Agree. What’s the Holdup?

By Giuseppe Ciccomascolo
Key Takeaways
  • The EU is struggling to agree on a comprehensive AI law.
  • The main areas of disagreement are the scope of the law, the definition of high-risk AI, and the enforcement mechanisms.
  • The delay in passing an EU-wide AI law could give other countries an advantage in developing AI technologies.

In response to the growing concerns about artificial intelligence (AI), the European Union (EU) has been developing a comprehensive set of regulations to govern the development and use of AI. The proposed AI Law aims to establish a framework for ethical and responsible AI development while ensuring that AI is used in a way that benefits society.

However, the AI Law has been the subject of much debate and controversy, with some arguing that it is too restrictive and could stifle innovation, while others believe that it is not strong enough to protect citizens from the risks of AI.


What’s the EU AI Act?

EU lawmakers pressed pause on negotiations over the European Union’s groundbreaking Artificial Intelligence Act on Thursday. Despite almost 24 hours of intense discussions, an agreement remained elusive. The three-way debate involving EU member states, the European Commission, and the European Parliament resumed on Friday, marking a continuation of the high-stakes deliberations.

Following a prolonged and tense session, legislators, acknowledging the need for a break, opted to extend the debate into a third day to allow for a well-deserved rest.

“We are exhausted. We cannot go on like that. We need to sleep so we can reassess the texts,” said a person present at the talks.

A Risk-Based Approach To Regulation

But what are they debating? An EU Parliament document revealed that the European Commission is poised to maintain a registry of AI models identified as posing a ‘systemic risk.’ Concurrently, providers of general-purpose AI systems are slated to disclose comprehensive summaries outlining the content used to train them.

The legislation may exempt AI released under free and open-source licenses from regulatory oversight in most cases, with exceptions for uses deemed high-risk or associated with prohibited purposes.

However, provisions on the use of AI in biometric surveillance, as well as on access to source code, remain subject to ongoing discussions, as disclosed by two additional sources familiar with the matter.

Disagreement Over The Details

For the past two years, EU countries and lawmakers have grappled with finalizing the details of the draft rules initially proposed by the Commission. The fast-paced evolution of technology has added complexity to the negotiations, making consensus elusive.

The forthcoming law holds huge significance, as it has the potential to serve as a template for other nations in their pursuit of AI industry regulations. The law could present an alternative to the U.S.’ light-touch approach and China’s interim rules.

Why Is It Taking So Long?

With EU countries and lawmakers racing against time, the aim is to secure a final agreement for a vote in the spring. This tight schedule aligns with the imperative to conclude the legislative process before the European Parliament elections in June, ensuring uninterrupted momentum in shaping the future of AI regulation.

Failure to meet this deadline may result in a postponement of the law, jeopardizing the 27-member bloc’s coveted first-mover advantage. Despite the urgency, the practical implementation of any legislation might still take up to two years to materialize.

The many issues at hand present a complex challenge, further compounded by the rapid pace of technological advancement.

While EU lawmakers agreed on the regulations for artificial intelligence systems like ChatGPT, ongoing discussions revolve around the implementation of rules for more intricate technologies such as biometric identification in public spaces, predictive policing systems, and emotion recognition software.

The central question remains: should programs posing an ‘unacceptable’ risk to people’s rights be outright prohibited, or should they be subject to stringent regulations?

Artificial intelligence permeates various facets of our lives, including the realm of creativity. Recently, 34 Italian associations of artists and authors addressed the government, urging support for a balanced regulatory framework.

They advocate for regulations that ensure source transparency, fostering the development of artificial intelligence technologies while safeguarding and promoting original human creativity and Italy’s rich cultural heritage.

Italy, along with France and Germany, initially opposed more stringent legislation, reflecting the ongoing deliberations surrounding the regulation of AI applications.


Will The EU Have A New AI Law?

At a pivotal juncture, the European Union’s groundbreaking artificial intelligence regulations, first proposed in 2019 as the AI Act, face a decisive moment as negotiators strive to finalize the intricate details this week.

Originally anticipated as the world’s first comprehensive AI regulations, signaling the EU’s leadership in tech industry oversight, the process has encountered delays, compounded by a late-stage conflict over governing systems fundamental to general-purpose AI services, such as OpenAI’s ChatGPT and Google’s Bard chatbot.

“To be honest, even if we reach an agreement, I don’t think everyone will be satisfied,” an EU lawmaker told CCN, requesting anonymity.

The deliberations have become a battleground, with major tech firms lobbying against perceived overregulation that could stifle innovation. Simultaneously, European lawmakers are advocating for enhanced safeguards for the advanced AI systems developed by these companies.

Amidst this, global players like the US, UK, China, and alliances like the Group of 7 major democracies are racing to establish guidelines for the swiftly evolving technology.

This urgency is underscored by warnings from researchers and rights groups regarding both the existential threats posed by generative AI to humanity and the potential risks it poses to everyday life.


Giuseppe Ciccomascolo

Giuseppe Ciccomascolo began his career as an investigative journalist in Italy, where he contributed to both local and national newspapers, focusing on various financial sectors. Upon relocating to London, he worked as an analyst for Fitch's CapitalStructure and later as a Senior Reporter for Alliance News. In 2017, Giuseppe transitioned to covering cryptocurrency-related news, producing documentaries and articles on Bitcoin and other emerging digital currencies. He also played a pivotal role in establishing the academy for a cryptocurrency exchange website. Crypto remained his primary area of interest throughout his tenure as a writer for ThirdFloor.