
Amazon, Anthropic, Meta Pledge to Deactivate AI Tech If This Happens

By Samantha Dunn

Key Takeaways

  • The AI Seoul Summit brings global leaders together to coordinate on AI safety.
  • Sixteen Big Tech companies have signed an agreement to establish thresholds for AI-related risks.
  • The new commitments are designed to be built upon, signaling continued global collaboration.

Sixteen major tech companies have pledged to deactivate their AI technologies if certain high-risk thresholds are met. This landmark agreement was announced at the commencement of the AI Seoul Summit, a two-day event co-hosted by South Korea and the UK.

The voluntary commitments, described as historic by the UK government, include leading names in the tech industry such as OpenAI, Mistral, Amazon, Anthropic, Google, Meta, Microsoft, and IBM.

A World-First Agreement

At the AI Seoul Summit opening day, companies from the US, China, Europe, and the Middle East agreed to a set of comprehensive safety outcomes.

The agreement outlines a framework for identifying potential risks associated with AI, establishing thresholds for those risks, and ensuring transparency in the processes. The signatories have committed to pausing or deactivating AI models or systems if the identified risks cannot be sufficiently mitigated. The safety frameworks will specify conditions under which severe risks would be “deemed intolerable.”

The exact conditions that would require the deactivation of AI technologies were not specified in any public release.

“This is a world first,” said UK Prime Minister Rishi Sunak. “Having so many leading AI companies from across the globe agree on the same commitments to AI safety is unprecedented.”

Global Collaboration for AI Safety

The AI Seoul Summit follows last year’s AI safety summit held at Bletchley Park in the UK. That summit concluded with a declaration by nearly 30 countries to ensure AI development is “human-centric, trustworthy, and responsible.” The current summit aims to build on these commitments with further agreements from global leaders.

Prime Minister Rishi Sunak said: “AI is a hugely exciting technology – and the UK has led global efforts to deal with its potential, hosting the world’s first AI Safety Summit last year. But to get the upside we must ensure it’s safe.”

French President Emmanuel Macron announced that France would host the next in-person AI safety summit, indicating ongoing international collaboration on this critical issue.

Mark Brakel, director of policy at the Future of Life Institute, expressed cautious optimism about the new commitments. “While it’s encouraging to see these companies take responsibility, it is crucial that the necessary guardrails, standards, and oversight are codified into law,” Brakel said. “Goodwill alone is not sufficient.”

OpenAI recently unveiled its new model, GPT-4o, boasting faster capabilities, while Google introduced a new AI assistant under the banner of “Project Astra” and a suite of upcoming AI-powered Android features at its 2024 developer conference.

OpenAI Safety Update

As part of the AI Seoul Summit, OpenAI issued an update reinforcing its commitment to AI safety and shared its approach to integrating safety measures in a blog post. “We are proud to build and release models that are industry-leading on both capabilities and safety,” the company stated.

In the blog post, OpenAI included ten key safety practices it is actively using and improving upon.

Challenges Ahead

Regulatory efforts are also gaining momentum. The EU’s AI Act, set to come into effect in June 2024, will be the first comprehensive legislation of its kind. The UN General Assembly has also adopted a resolution promoting “safe, secure, and trustworthy” AI.

Despite the voluntary commitments from global leaders, concerns about AI safety remain. Recent resignations from OpenAI’s team dedicated to preventing AI systems from going rogue highlighted internal conflicts about prioritizing safety over product development. Jan Leike, one of the team’s former leaders, emphasized the need for a “safety-first AI company.”
