
EU AI Act Faces Scrutiny: Experts Warn It Could Be Outdated on Launch Day

By James Morales

Key Takeaways

  • The EU’s AI Act will enter into force on August 1.
  • Critics have argued that the pace of AI development means certain provisions in the Act could soon be obsolete.
  • European lawmakers remain divided over the decision to classify foundation models according to the amount of training compute required.

On August 1, the first provisions of the EU’s AI Act will enter into force, marking the start of a new chapter for European AI regulation.

But with advancements in AI technology accelerating rapidly, the legislation may already be outdated.

The Challenge of Regulating Fast-Moving Technology

In the tradition of EU regulation that defines the rights and responsibilities of different actors within the multi-country union, the AI Act contains many explicit definitions and criteria. However, its authors also faced the challenge of designing a framework that would remain effective amid rapid technological advancements.

The regulation needs clear definitions that can be applied uniformly by national authorities. However, if those definitions are too specific, they risk becoming obsolete as soon as new technologies emerge. Striking the right balance is key, and the inevitable tradeoffs involved are already creating challenges.

One area where the AI Act has been criticized is in its handling of foundation models. 

Foundation Models in the AI Act

Initially, the legislation didn’t even mention foundation models, maintaining a strictly risk-based taxonomy of different AI systems based solely on their intended application. 

During negotiations over the final text, lawmakers were divided into two camps: those who believed the Act should only cover user interfaces like ChatGPT, and those who argued the underlying models that power them should be regulated. 

Even after a political agreement that excluded foundation models from direct regulation was reached in December 2023, certain quarters of the EU remained dissatisfied.

Faced with the prospect that such objections could derail the Act, the final text adopted in March made it clear that “general-purpose AI models” (Article 3(63)) fell within the regulation’s scope.

New Risk Rules Criticized

Although those calling for stronger regulatory safeguards welcomed the introduction of language specifically dealing with foundation models, the decision to define a size threshold for regulated models has been criticized. 

The AI Act distinguishes between foundation models based on the computing power used to train them: models trained using more than 10²⁵ floating-point operations (FLOPs) are deemed to pose “systemic risk” and face enhanced oversight.

[Chart: the largest foundation models by training compute (FLOPs). Source: epochai.org]
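
For a sense of how few models that threshold captures, the sketch below applies the widely cited rule of thumb that a dense transformer’s total training compute is roughly 6 × parameters × training tokens. The model sizes and token counts are hypothetical illustrations, not figures from the Act or from any named model, and the 6·N·D formula is itself only an approximation.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP
# "systemic risk" threshold, using the common ~6 * parameters * tokens
# approximation for a dense transformer's total training compute.
# NOTE: all model sizes and token counts below are hypothetical
# illustrations, not figures from the Act or from any real model.

THRESHOLD_FLOPS = 1e25  # cumulative training-compute threshold


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens


hypothetical_models = {
    "very large model (1T params, 20T tokens)": (1e12, 20e12),
    "mid-size model (70B params, 15T tokens)": (70e9, 15e12),
    "small model (8B params, 15T tokens)": (8e9, 15e12),
}

for name, (params, tokens) in hypothetical_models.items():
    flops = training_flops(params, tokens)
    side = "ABOVE" if flops > THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs -> {side} the threshold")
```

On these illustrative numbers, only the trillion-parameter model clears 10²⁵ FLOPs; even a 70-billion-parameter model trained on a large corpus lands below it, which is the arithmetic behind both Tudorache’s warning and the concern about smaller models discussed below.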

Commenting on the compute threshold in Euractiv, Dragoş Tudorache, an MEP who served as one of the European Parliament’s lead negotiators on the AI Act, warned that the rules could soon become obsolete.

“By the time the rules for foundation models become applicable either there will be four or five big models that will pass this threshold […] or a new leap in technology [will bring down the computational requirements for powerful foundation models],” he said.

Model Size and AI Risk

Another factor that could make the training threshold obsolete is the emergence of increasingly capable small language models (SLMs). With SLMs fast closing the performance gap with large language models, their comparatively tiny training requirements mean they could slip through the cracks of EU regulation.

Ultimately, the AI Act creates a framework for how the EU will regulate AI, and specific criteria such as the foundation model size threshold could be updated further down the line. Nevertheless, negotiating changes that expand the regulatory perimeter won’t be easy and would likely face strong resistance from countries like France, which opposed stricter rules for foundation models in the first place.

With less than two weeks to go before the AI Act enters into force, it remains to be seen how well the EU’s approach will mitigate AI risk.

On the one hand, categorizing models by size may be the best way for regulators to deal with the most powerful AI systems without stymying investment in EU firms or creating unnecessary barriers for smaller developers. On the other, the new provisions could confuse size with risk, and the regulation might have been better off focusing strictly on how the technology is applied. 


James Morales

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.