
AI Act Goes Live: EU Bans High-Risk AI, Fines up to €35M

By Kurt Robson
Edited by Samantha Dunn
Key Takeaways
  • The first compliance deadline for the EU’s AI Act has now passed, allowing regulators to ban AI systems with the highest risk to society.
  • Companies found to be using these AI systems could face a fine of up to €35 million ($36 million).
  • The news comes as a newly published international AI safety report warned of the rising threat of general-purpose AI.

Regulators in the European Union can now ban AI systems they deem to be an “unacceptable risk” to society and impose hefty fines for continued use.

The new rule, implemented on Sunday, Feb. 2, marks the first compliance deadline for the EU’s AI Act—which came into force in August 2024.

Companies and developers across the EU are racing to comply with a series of staggered deadlines for the bloc’s sweeping AI framework, most of which will be applicable by mid-2026.

AI Act Begins

The rule that came into force on Sunday concerns AI systems found to be at the EU’s highest level of risk.

The AI Act looks at AI systems with four levels of threat: minimal risk, limited risk, high risk, and unacceptable risk.

Developers or businesses found to be using AI applications in the unacceptable risk category could be fined up to €35 million.

The EU AI Act states that companies could alternatively be fined up to 7% of their global annual revenue from the previous financial year, whichever is higher.

AI systems under the unacceptable risk level include:

  • Untargeted scraping of the internet or CCTV footage for facial images to build up or expand databases.
  • Emotion recognition in the workplace and educational institutions, unless for medical or safety reasons (e.g., monitoring the tiredness levels of a pilot).
  • Individual predictive policing based solely on profiling people.

As companies and developers continue to ensure compliance with the rules, much remains to be determined about how the AI Act will be enforced.

EU lawmakers have promised to release additional guidelines on the rules.

General-Purpose AI is Growing

In January, ahead of the AI Action Summit in France, an International AI Safety Report was released. The report aimed to establish the “first comprehensive, shared scientific understanding of advanced AI systems and their risks.”

The report, which brought together insights from 100 independent international experts, warned against the growing threat of general-purpose AI.

General-purpose AI refers to technology capable of performing a broad range of tasks, making it more akin to human intelligence.

Unlike narrow AI, which is designed for specific tasks, general-purpose AI can more adaptably understand, learn, and apply knowledge.

“A few years ago, the best large language models could rarely produce a coherent paragraph of text,” the report stated.

“As general-purpose AI becomes more capable, evidence of additional risks is gradually emerging,” it added. “These include risks such as large-scale labor market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI.”

The report notes that some experts believe these risks are decades away, while some think they could lead to societal harm within the next few years.

EU vs. US

The AI Act marks the world’s first comprehensive rulebook on how AI can be used.

Henna Virkkunen, the European Commission’s executive vice president for tech sovereignty, security, and democracy, said the bill would “protect our citizens.”

Just days after his inauguration, President Donald Trump revoked Executive Order 14110, which former President Biden had signed in October 2023 to address the risks associated with AI.

OpenUK CEO Amanda Brock told CCN that Trump has effectively eliminated the need for AI models to undergo checking before they are released.

“Supporters will argue that this move will help speed up the innovation process and keep the U.S. at the forefront of the AI market,” Brock said. “For those against this move, it is one that puts technology innovation and potential profit ahead of personal privacy or security of data.”

Brock, who is set to host the State of Open Con in London on Feb. 4, said the new U.S. government wants to move faster on AI, but that doesn’t mean the AI community has to sacrifice safety and privacy requirements.

“Software communities can take the lead around keeping that mindset in place around safety, security, and privacy through collaborating with each other,” Brock said. “This makes it easier for everyone to benefit.”


Kurt Robson

Kurt Robson is a London-based reporter at CCN with a diverse background across several prominent news outlets. Having transitioned into the world of technology journalism several years ago, Kurt has developed a keen fascination with all things AI. Kurt’s reporting blends a passion for innovation with a commitment to delivering insightful, accurate and engaging stories on the cutting edge of technology.