Key Takeaways
On August 1, the first provisions of the EU’s AI Act will enter into force, marking the start of a new chapter for European AI regulation.
But with advancements in AI technology accelerating rapidly, the legislation may already be outdated.
In the tradition of EU regulation that defines the rights and responsibilities of different actors within the multi-country union, the AI Act contains many explicit definitions and criteria. However, its authors also faced the challenge of designing a framework that would remain effective amid rapid technological advancements.
The regulation needs clear definitions that can be applied uniformly by national authorities. However, if those definitions are too specific, they risk becoming obsolete as soon as new technologies emerge. Striking the right balance is key, and the inevitable tradeoffs involved are already creating challenges.
One area where the AI Act has been criticized is in its handling of foundation models.
Initially, the legislation didn’t even mention foundation models, maintaining a strictly risk-based taxonomy of different AI systems based solely on their intended application.
During negotiations over the final text, lawmakers split into two camps: those who believed the Act should cover only user-facing applications such as ChatGPT, and those who argued that the underlying models powering them should also be regulated.
Even after a political agreement that excluded foundation models from direct regulation was reached in December 2023, certain quarters of the EU remained dissatisfied.
With such objections threatening to derail the Act, the final text adopted in March made clear that "general-purpose AI models" (Article 3(63)) fall within the regulation's scope.
Although those calling for stronger regulatory safeguards welcomed the introduction of language specifically dealing with foundation models, the decision to define a size threshold for regulated models has been criticized.
The AI Act distinguishes between foundation models based on the computing power used to train them: models trained with more than 10²⁵ floating-point operations (FLOPs) are deemed to pose "systemic risk" and face enhanced oversight.
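To get a feel for where that threshold sits, here is a minimal back-of-envelope sketch, assuming the widely used "6 × parameters × training tokens" approximation for dense transformer training compute. The model sizes and token counts below are hypothetical illustrations, not official classifications under the Act.

```python
# Rough check against the AI Act's 10^25 FLOPs training-compute threshold.
# Assumes the common "6 * parameters * training tokens" approximation for
# dense transformer training compute; all example figures are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # training-compute threshold for "systemic risk"

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical model configurations (parameters, training tokens)
examples = {
    "8B small model, 15T tokens":  (8e9, 15e12),    # ~7.2e23 FLOPs, well below
    "70B model, 15T tokens":       (70e9, 15e12),   # ~6.3e24 FLOPs, below
    "400B model, 15T tokens":      (400e9, 15e12),  # ~3.6e25 FLOPs, above
}

for name, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    exceeded = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> exceeds threshold: {exceeded}")
```

Under this approximation, only the largest frontier-scale training runs cross the line, while smaller models trained on the same data sit one or two orders of magnitude below it.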
Commenting on the compute threshold in Euractiv, Dragoş Tudorache, an MEP who acted as one of the European Parliament's lead AI Act negotiators, warned that the rules could soon become obsolete.
“By the time the rules for foundation models become applicable either there will be four or five big models that will pass this threshold […] or a new leap in technology [will bring down the computational requirements for powerful foundation models],” he said.
Another factor that could render the training threshold obsolete is the emergence of increasingly capable small language models (SLMs). With SLMs rapidly closing the performance gap with large language models, their comparatively tiny training compute means they could slip through the cracks of EU regulation.
Ultimately, the AI Act creates a framework for how the EU will regulate AI, and specific criteria such as the foundation model size threshold could be updated further down the line. Nevertheless, negotiating changes that expand the regulatory perimeter won't be easy and would likely face strong resistance from countries such as France, which opposed stricter rules for foundation models in the first place.
With less than two weeks to go before the AI Act enters into force, it remains to be seen how well the EU’s approach will mitigate AI risk.
On the one hand, categorizing models by size may be the best way for regulators to deal with the most powerful AI systems without stymying investment in EU firms or creating unnecessary barriers for smaller developers. On the other, the new provisions could confuse size with risk, and the regulation might have been better off focusing strictly on how the technology is applied.