
Global AI Regulation in 2025: Proposed Laws and Frameworks 

By James Morales

Key Takeaways

  • In 2024, landmark bills in the EU and California set the tone for emerging AI regulations.
  • Several countries are expected to introduce new AI regulations in 2025.
  • From an early focus on applying existing frameworks to the new technology, a growing number of lawmakers now support dedicated legislation.

In 2024, several jurisdictions around the world took the first steps toward regulating AI, with landmark bills in the EU and California setting the tone for a wave of legislation to follow. 

Heading into 2025, both jurisdictions need to iron out the finer details of enactment and enforcement. Meanwhile, governments in Australia, the U.K., Washington, and elsewhere are preparing their own regulatory frameworks for AI.

The EU’s AI Act

Perhaps no piece of legislation is more important to the beginning of a new era of AI regulation than the EU’s AI Act.

Historically, when the EU opens up a new regulatory frontier, other countries tend to follow.

Because the new rules will bind any company that sells or deploys AI within the single market, compliance with the AI Act is a requirement for both Big Tech firms and startups.

After the Act entered into force in August, the first key date for implementation will be Feb. 2, when prohibited AI practices must be withdrawn from the market.

These include real-time biometric surveillance tools, systems that subliminally manipulate people’s behavior and AI that makes automated decisions based on protected characteristics like race or gender. 

AI Act codes of practice will be established by May 2. General-purpose AI models must comply with the Act’s requirements by August 2.

AI Regulation in California 

Like the EU, California has often been at the forefront of regulating new industries, passing legislation that other states and countries frequently emulate.

Although Governor Newsom ultimately vetoed SB1047, which would have created AI safety standards to protect against “critical harms,” he did pass a string of other bills covering everything from the use of AI voice replicas to the obligations of social media platforms that host deepfakes. 

Bills that will become effective on Jan. 1 include AB2602, which prohibits the unauthorized use of AI replicas in entertainment; AB1008, which clarifies that AI-generated data containing personal information is subject to data protection regulations; and AB3030, which introduces requirements for healthcare providers using generative AI.

New Legislation Expected in the U.K.

While the previous U.K. government outlined a vision for AI regulation that would apply an “agile” approach based on existing frameworks, since coming to power in the summer, the Labour government has signaled its intention to introduce new legislation.

The King’s Speech in July announced plans for “appropriate legislation” that may resemble the EU’s AI Act, with specific new rules for developers of “the most powerful models.”

In November, the Secretary of State for Science, Innovation and Technology, Peter Kyle, said the government aims to implement new AI regulations in 2025.

In the meantime, the government is consulting on a proposal to update the country’s copyright laws to account for AI training. 

Australia’s AI Regulation Recommendations

The Australian Senate’s Select Committee on AI released a report last year recommending comprehensive regulation of the technology to mitigate security threats.

Similar to the U.K., the recommendations suggest a shift away from a piecemeal, sector-specific approach that relies on existing legislation toward a dedicated, economy-wide model more akin to the EU’s AI Act. 

This shift acknowledges that although, in theory, existing frameworks already protect against potential AI harms (for example, data protection violations or workplace discrimination), without dedicated legislation, high-risk practices could still slip through the regulatory net.

Philippines AI Development Authority

One of the consequences of rising AI regulation is that many jurisdictions are reevaluating the responsibilities of different regulators. 

The EU’s AI Act, for example, creates new regulatory and enforcement powers for the European Commission. Each member state must also designate authorities for market surveillance, either by setting up new ones or expanding the remit of existing agencies. 

Meanwhile, lawmakers in the Philippines are pushing to create an entirely new regulator specifically to oversee AI.

Introduced in 2023, the Artificial Intelligence Development and Regulation Act of the Philippines proposes the creation of an AI Development Authority (AIDA) responsible for implementing the nation’s AI strategy.

With several other AI-related bills making their way through the country’s Senate, such an organization could evolve into a powerful governing body to enforce the emerging rules.


James Morales

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.