
Australian AI Act in the Works? New Standards for High-Risk AI Systems

Published September 5, 2024 2:21 PM
James Morales

Key Takeaways

  • The Australian government has published a set of AI safety guidelines.
  • The new safety standard provides an important regulatory foundation.
  • However, lawmakers face a choice when it comes to implementing the new rules.

The Australian government took its clearest step yet toward AI regulation on Sept. 5.

In a policy paper, the Department of Industry, Science and Resources proposed a definition of “high-risk AI” and 10 “mandatory guardrails” that would apply to such high-risk systems.

New Australian AI Rules

With the launch of the “Voluntary AI Safety Standard,” the government has introduced a set of transparency and accountability requirements that will apply across the AI sector.

The new guidelines align with a growing international consensus on the need for an AI safety framework that accounts for the wide range of use cases the technology can be applied to.

Firms will be required to identify potential risks before deploying AI systems and monitor them throughout their application. Other requirements relate to appropriate data governance, including measures to protect privacy and ensure security.

Of course, including “mandatory” guardrails within a “voluntary” safety standard is something of a contradiction. However, the policy paper explores ways the government could enforce the new rules.

3 Options for AI Regulation

Having laid down the foundation for AI regulation, the government has outlined three potential paths forward:

  1. Adopting the guardrails within existing regulatory frameworks as needed.
  2. Introducing new legislation to adapt existing regulatory frameworks.
  3. Introducing a new AI-specific law.

The first option mirrors the approach taken by the Biden administration, which has sought to shape AI policy through tools such as the Executive Order on Safe, Secure, and Trustworthy AI and the Blueprint for an AI Bill of Rights.

While these efforts have established the basic objectives of US regulation, they don’t shift the underlying legal reality or equip federal regulators with new powers.

At the other end of the spectrum, option 3 would take Australia down the same path as the EU, creating a unified “AI Act” to govern the technology and opening up a whole new realm of AI law.

Representing the middle path, option 2 would adjust existing legislation to clarify how it applies to the new technology. 

As the report points out, AI is already deployed within legal frameworks related to privacy, financial regulation, consumer protection, and discrimination. However, because many of the relevant laws date back years or even decades, how they apply to AI remains open to interpretation, creating uncertainty for businesses and consumers. 

Australian AI Act Hangs on 2025 Election

With Australians set to elect new representatives in 2025, the ultimate decision on any new AI legislation will likely fall to the next government. 

A parallel can be drawn with the UK, where the Labour government has pursued a more statist regulatory agenda since taking power from the Conservative Party in July.

In Westminster, the political shift makes an EU-style AI Act more likely, with Prime Minister Keir Starmer taking a different stance from his predecessor, who favored a more hands-off approach.

Meanwhile, if Anthony Albanese’s Labor government is ousted in Australia’s next election, the opposite shift could occur, with the opposition more likely to adopt option 1 or 2.

Ultimately, the three-way model is reductive and obscures a multiplicity of options available to lawmakers. In Australia and the UK, some form of new legislation seems to be the most likely outcome. But its exact scope is still up for debate.
