Passing new laws can take years, sometimes even decades. But facing the rapid acceleration of AI development and use, lawmakers worldwide have been spurred to action. Joining governments in America, China, and Europe, Japan is the latest country to push for AI regulation. As reported on Thursday, February 15, Japan’s ruling Liberal Democratic Party wants to introduce the new law before the end of the year.
The government has yet to disclose any details of its plans. But the pace of legislation points to a fast-emerging global system of rules and regulations. For AI developers, mapping its landscape will be critical going forward.
So far, AI regulation has faced differing global approaches.
China has come down hard on foreign platforms, banning ChatGPT while encouraging home-grown alternatives that it can more easily control.
At the other end of the scale, the UK and the US have preferred a more laissez-faire approach, mostly letting courts and regulators decide how to apply existing rules in the context of AI, without imposing any sweeping restrictions on the industry.
Sitting somewhere in the middle, the EU’s AI Act targets the most dangerous applications of the technology, establishing guardrails to prevent the abuse of facial recognition and other potentially invasive technologies.
Last year, it was reported that Japanese officials were leaning toward a more relaxed approach in line with the US. However, a better comparison might be the UK.
As in the UK, high-tech sectors play an outsized role in the Japanese economy, which contracted by 0.4% in the last three months of 2023, according to the latest data.
With the UK also facing a potential recession, the two countries share many of the same economic challenges. Regarding AI policy, these similarities could shape how and to what end they choose to regulate the technology.
Consider, for example, the title of a recent UK government consultation: “A pro-innovation approach to AI regulation.”
The consultation focused on AI’s potential to boost economic growth. It also considered important questions of AI safety, but the government has shown no appetite for the type of restrictions imposed by the EU’s AI Act.
One key dividing line can be observed in the different approaches to privacy taken on each side of the English Channel.
A centerpiece of the EU’s Act is the concept of the “high-risk AI system,” defined as one that poses “a high risk of harm to the health and safety or the fundamental rights of persons.” Systems deemed high-risk include those used for surveillance, which pose a special threat to migrants and other groups that are already over-surveilled.
While privacy advocates have argued that the restrictions don’t go far enough, the AI Act nonetheless bans the use of “remote biometric identification” (RBI) except by law enforcement for the prevention and investigation of serious crimes.
In contrast, the use of AI surveillance in the UK has exploded since 2022, with the government actively encouraging police forces to increase their use of the very technologies that the EU is attempting to curb.