
First-Ever AI Regulation: EU’s AI Act Pros and Cons

By Giuseppe Ciccomascolo

  Key Takeaways

  • EU lawmakers approved the first comprehensive regulation of artificial intelligence in the world.
  • The law sets a global standard for ethical and responsible AI development.
  • But it has been criticized for being overly prescriptive and burdensome, potentially stifling innovation in the AI sector.
  • Here are pros and cons of the first-ever regulation in the AI world.

Artificial intelligence (AI) has emerged as a transformative force, revolutionizing industries and enhancing our daily lives. However, as AI’s capabilities grow, so do concerns about its potential risks, from exacerbating societal biases to undermining human autonomy.

In response to these concerns, the European Union (EU) has taken the lead in establishing the world’s first comprehensive AI regulation, the Artificial Intelligence Act (AIA).

Criticism And Praise

The AIA’s ambitious goals and comprehensive approach have garnered both praise and criticism. Supporters applaud the EU’s proactive stance in addressing AI’s challenges and setting a precedent for global regulation. They view the AIA as a necessary safeguard to ensure AI is used responsibly and ethically, fostering public trust and enabling AI to contribute positively to society.

Critics claim that the AIA’s overly restrictive approach could hinder innovation and limit the potential benefits of AI. They argue the act’s risk classifications are too stringent and the compliance requirements could be overly burdensome for smaller AI developers.

However, proponents of the AIA maintain that its careful risk assessment and proportionality principles strike a balance between fostering innovation and protecting society from potential harm. The AIA’s implementation and its impact on the AI landscape will be closely watched as the EU endeavors to navigate the complexities of regulating this rapidly evolving technology.

The AI Act represents a milestone as the first comprehensive international regulatory framework for artificial intelligence. Notably, it applies directly across all member states. Drawing parallels with past pivotal legislation like the GDPR, the European privacy law, and anticipating the impact of forthcoming regulations in the approval pipeline such as the Digital Services Act, the Digital Markets Act, or the Data Act, experts underscored the incorporation of significant and innovative elements within the AI Act.

Business Oriented Rules

42 Law Firm contended that the primary merit of the proposed legislation lies in its prioritization of economic considerations, foregrounding the business and economic facets over ethical dimensions – an approach that gains significance as artificial intelligence continues to evolve. This orientation becomes particularly pertinent in light of Sam Altman’s assertion that AI carries the potential risk of humanity’s extinction.

It is noteworthy that Altman, the founder of OpenAI, a prominent Californian startup specializing in the development of ChatGPT, has not only voiced concerns about the existential threat posed by AI but has also joined forces with other CEOs in Silicon Valley to advocate for measures to mitigate such risks. Their joint appeal underscores the gravity of the AI threat, likening it to potential existential risks on par with pandemics and nuclear war.

In view of these concerns, the emphasis on economic aspects within the legislation is favorable. This is particularly true given the inherent unpredictability of advanced AI models, notably generative ones such as ChatGPT.

In recognizing the unpredictability of these AI systems, the legislation wisely directs attention to the foundational element: the datasets. By emphasizing scrutiny and regulation of the data sources, or ‘datasets,’ feeding into AI models, the law adopts a practical and concrete approach, steering clear of the intricate challenge of predicting the unpredictable trajectory of AI development.

Risk Levels

The second noteworthy aspect of the AI Act, according to 42 Law Firm, lies in its departure from strict prescriptions dictating permissible and impermissible actions. Instead, it adopts a risk-based approach, where the permissibility of AI applications is contingent upon a comprehensive evaluation of associated risks on a case-by-case basis. This method is deemed the appropriate strategy, especially given uncertainty surrounding short-term technological developments.

The regulation categorizes artificial intelligence applications based on the degree of risk involved. Rather than proscribing certain uses outright, the law defines parameters for acceptable risk levels across four categories:

  • Practices subject to prohibition (considered very high risk)
  • High-risk activities
  • Limited-risk scenarios
  • Activities associated with minimal risk

Additional parameters, such as environmental risk, are factored into the process. For instance, the legislation deems the application of real-time remote biometric identification systems in public spaces, even retroactively, as a very high-risk activity warranting prohibition.

However, this restriction does not imply a blanket prohibition on the general use of artificial intelligence in conjunction with biometrics. The risk-based approach ensures a nuanced assessment tailored to specific applications.

Sanctions

The third aspect relates to the sanctions outlined in the legislation. Fines can escalate to a substantial €50 million, underscoring their significant deterrent effect.

EU AI Act Cons

The AI Act’s critical points relate not only to its timescales – the rules will not come into force before the end of 2025 or the beginning of 2026, raising the risk that technical evolution will outpace them – but also to the complexities of its application.

Complexity For Businesses

Another vital aspect of the Regulation, directly affecting businesses, involves imposing a range of obligations on AI utilization. These obligations encompass precise documentary requirements, controls, and checks applicable to all actors within the supply chain: suppliers (those responsible for AI production), distributors (those facilitating the technology’s market distribution), and users, including small and medium-sized enterprises (SMEs).

For instance, a comprehensive impact assessment is mandated, with considerations for privacy, cybersecurity, human rights, and ethical and social impacts. These assessments entail considerable costs, particularly for SMEs seeking to employ generative AI systems for tasks like marketing or even non-generative AI for digitizing company processes. Small businesses are likely to incur both time and financial expenditures, often necessitating the engagement of external professionals.

AI and law analysts underscored a critical perspective. A spokesman for 42 Law Firm said: “While it’s reasonable for a major AI player like OpenAI to comply with extensive rules given its role in AI production, users should experience as simplified a process as possible.”

The second potentially burdensome obligation relates to transparency. In the case of high-risk AI, there is a need for a mandatory declaration of its usage. A spokesperson for 42 Law Firm said: “In the conventional business landscape, declaring AI usage may pose minimal challenges. However, in the realm of services and independent VAT-registered entities, this transparency requirement may encounter resistance.”

In essence, the crux of the matter lies in dealing with a tool that, despite its current existence, defies predictability and engenders complexities warranting an open and extensive debate.

Concerns On Agreements with AI-Developing Countries

The third identified risk centers on the international debate. Sam Altman has expressed disagreement with EU regulations, particularly concerning certain ChatGPT technologies. Beyond the technological nuances, a fundamental concern arises: this inaugural legislation diverges significantly from the approaches taken by the U.S. and China on the same topic, raising the potential for a split in the global business landscape.

Institutions and businesses should not underestimate this aspect. Altman said: “Europe is setting rules for applications of technologies it does not produce.”

The legislation, being the first of its kind, stands in contrast to the approaches of major players in the field, namely the US and China. Altman suggests that this divergence might pose a risk from a business standpoint.

There is a crucial need for alignment in the application of these rules globally, in light of the current technological dominance of the United States and China. While the European market holds significant importance, imposing stringent rules may have limitations, especially when the technological solutions are not domestically sourced.

A Potential Downside

Returning to the initial premise of prioritizing business over ethics, experts noted a potential downside: the risk of losing business opportunities and favoring economic development in other parts of the world at the expense of the Eurozone.

How does one address this critical issue? 42 Law Firm proposes adherence to international soft law agreements, advocating for the creation of an international regulatory committee for artificial intelligence. Such a committee could be akin to an expanded G7 that also includes China or other relevant countries. This approach represents a balanced compromise between respecting regulatory standards and fostering business activities. Without such international collaboration, there is concern that strict regulations may impede economic development in Europe, underscoring the necessity for a nuanced approach.

FAQs

What is the EU AI Act?

The Artificial Intelligence Act (AI Act) is a European Union regulation on artificial intelligence in the EU. Proposed by the European Commission on 21 April 2021, it aims to introduce a common regulatory and legal framework for artificial intelligence.

Why does it matter?

The AIA matters because it is the first time an institution – the EU Parliament, in this case – has approved a law aimed at regulating the AI sector.

When did EU lawmakers start talking about an AI law?

The EU Parliament first debated an AI law in March 2018. It took more than five years to agree on a draft of the law.
