Key Takeaways
The European Union is staking its claim as a global artificial intelligence (AI) leader by inviting tech giants and a broad spectrum of stakeholders to shape the continent’s AI future.
With the AI Act poised to reshape the industry, the drafting of a general-purpose AI Code of Practice represents a crucial moment for the bloc, which remains the only major jurisdiction to have approved a comprehensive AI law.
The European AI Office has announced a call for expressions of interest to participate in drafting the first general-purpose AI Code of Practice.
It invites eligible general-purpose AI model providers, downstream providers, industry organizations, civil society organizations, rightsholders, academia, and other independent experts to express their interest in contributing to the development of this Code.
This Code will be developed through an iterative drafting process and is expected to be completed by April 2025, roughly nine months after the AI Act entered into force on Aug 1, 2024. The Code aims to ensure the proper application of the AI Act’s rules for general-purpose AI models.
Simultaneously, the AI Office has launched a multi-stakeholder consultation on trustworthy general-purpose AI models under the AI Act. This consultation allows all stakeholders to voice their opinions on the topics addressed by the Code of Practice.
The Code of Practice will outline the AI Act’s rules for providers of general-purpose AI models, including those with systemic risks. These rules will take effect 12 months after the AI Act’s entry into force, and providers can use the Code of Practice to demonstrate compliance.
After the Code is published, the AI Office and AI Board will evaluate its adequacy and release their findings. The Commission may then approve the Code of Practice for general application in the Union or, if it is found inadequate, establish common rules for implementing the relevant obligations.
A key aspect of the AI Act is developing Codes of Practice for General-Purpose AI (GPAI) models to ensure practical compliance with the Regulation’s principles. On Jul 8, 2024, several Members of the European Parliament raised concerns about the lack of civil society involvement in drafting these rules. They argued that allowing AI model providers to lead the process might favor industry interests over societal concerns.
Civil society members also warned that large tech companies might dominate the rule-making process, undermining the AI Act’s goals. The European Commission has acknowledged these concerns and plans to include details about stakeholder participation in an upcoming call for expressions of interest. An external firm will lead the drafting process, with the AI Office overseeing and approving the final codes.
Law firm Grimaldi Alliance said: “The coming months will be crucial in determining how the EU navigates stakeholders’ involvement in crafting the AI Act’s Codes of Practice.
“A transparent and inclusive process will be essential for establishing strong, effective, and ethically sound standards for trustworthy AI development across Europe.”
The AI Act is the first comprehensive international regulatory framework for artificial intelligence, directly applicable across all member states. Like the GDPR, it carries significant business and economic implications, which are all the more important given AI’s potential risks, as OpenAI CEO Sam Altman has noted.
Altman has nonetheless criticized the AI Act for diverging from the approaches taken by the US and China, arguing that this divergence could affect global business dynamics. The discrepancy highlights the need for international regulatory alignment to avoid hindering economic development in Europe.
The Act emphasizes regulating the datasets that feed AI models, addressing their unpredictability. It adopts a risk-based approach, evaluating AI applications case by case rather than imposing blanket prohibitions. It also provides for significant sanctions, with fines of up to €35 million or 7% of global annual turnover.
The AI Act also imposes obligations on AI utilization, including documentation, controls, and impact assessments covering privacy, cybersecurity, human rights, and ethics.
These requirements can be costly for small and medium-sized enterprises (SMEs), and the transparency obligations for high-risk AI systems may meet resistance from industry.