
US and UK Announce AI Safety Partnership – Here’s How The Nations’ Approach to AI Differs

Published April 3, 2024 4:10 PM
James Morales

Key Takeaways

  • The US and the UK have signed a memorandum of understanding on AI safety.
  • While broadly aligned on the issue, the two countries have still taken different approaches to AI regulation.
  • Nevertheless, they are united in their shared commitment to supporting the industry’s growth.

Following on from November’s AI safety summit at Bletchley Park, the US commerce secretary, Gina Raimondo, and the British technology secretary, Michelle Donelan, have committed to the two countries working together on AI model safety tests.

While broadly aligned on issues of AI safety, in other respects, the UK and the US have diverged in their approach to AI regulation.

The UK’s Agile AI Regulation

In the UK’s regulatory tradition, lawmakers typically give regulators ample discretion when it comes to setting rules for the industries they oversee.

In the field of AI, as elsewhere, the focus of legislation is on defining the responsibilities of government agencies and the rights of individuals and businesses that operate within their frameworks. 

For AI regulation, that means the UK hasn’t introduced dedicated legislation like the EU’s AI Act or the Biden Administration’s executive order on AI safety. Instead, Westminster has taken a more piecemeal approach, including provisions that affect the AI industry in legislation such as the Online Safety Act, with responsibility for oversight distributed across different regulators.

However, the sector has now grown large enough that it may warrant its own dedicated regulatory authority, as proposed in the Artificial Intelligence Regulation Bill last month.

Washington Moves to Regulate AI

While the White House has made some moves toward federal AI regulation, congressional interventions have been highly specific and targeted.

For example, the Federal Artificial Intelligence Risk Management Act (2023) requires all government agencies to adhere to the AI risk framework developed by the National Institute of Standards and Technology (NIST). 

When it comes to industry regulation, individual states have taken the lead so far. 

California Emphasizes AI Safety

Given the preponderance of AI developers in the Bay Area, California is perhaps the most important state to shape the emerging landscape of AI regulation.

In February, California State Senator Scott Wiener introduced a bill seeking to establish safety standards for the AI industry. The bill would put “appropriate guardrails around the development of the biggest, most high-impact AI systems,” Wiener said in a statement.

While efforts to regulate AI development are taking place at multiple levels of government, any kind of global framework will inevitably take time.

Toward a Global Framework

As evidenced by the recent Memorandum of Understanding, the UK and the US are attempting to shape the conversation on AI regulation globally. 

While both governments are nominally concerned about AI safety, each country hosts a sizeable technology sector that could be hurt if the global regulatory environment starts to cut into its profits.

Noah Greene, an AI expert at the Center for a New American Security (CNAS), recently observed that the UK’s approach is more business-friendly than that taken by the EU.

Through the partnership, both countries will collaborate on new test exercises and are exploring personnel exchanges between their respective AI safety institutes.

“The UK’s framework is less rigid, allowing for firms to innovate more quickly,” Greene noted. In contrast, “the EU AI Act [has] created an overly broad regulation that acts as a gut punch to AI software,” he argued.
