
Google Revises AI Ethics, No Longer Rules Out Military and Surveillance Use 

By James Morales

Key Takeaways

  • Google has updated the “AI Principles” that guide its development of artificial intelligence.
  • The firm has quietly dropped a commitment not to develop AI for weapons.
  • Instead, Google increasingly emphasizes the need to develop AI for national security.

In a recent update to its “AI Principles,” Google has watered down language meant to prevent its tech from being used to cause harm.

The changes are part of a wider repositioning on the topic of AI safety that has seen Google quietly legitimize the use of AI for “national security,” paving the way for previously off-limits use cases including weapons and surveillance systems.

Defining AI Harm

First codified in 2018, Google’s AI Principles describe the firm’s approach to the responsible development of artificial intelligence and outline how it intends to prevent harm.

Until the latest update, the framework listed four specific applications the company wouldn’t pursue:

  1. Technologies that cause or are likely to cause overall harm.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Surveillance technologies that violate internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

In the updated version, however, almost all references to potentially harmful AI applications are gone. In their place, a single line commits Google to “mitigate unintended or harmful outcomes and avoid unfair bias.”

AI For National Security

While mentions of harm prevention are conspicuously absent from Google’s latest AI Principles, references to national security are increasingly common in the company’s literature.

Discussing the updated principles, Google DeepMind CEO Demis Hassabis and Senior Vice President for Research James Manyika embraced the prevailing political mood in America.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” they argued. “We believe that companies, governments, and organizations […] should work together to create AI that […] supports national security,” they added.

Alphabet’s President of Global Affairs, Kent Walker, put things even more bluntly in a recent blog post, arguing that “to protect national security, America must secure the digital high ground.”

Adopting a radically different tone from the Google of the past, Walker called for the government, “including the military and intelligence community,” to take a leading role in the procurement and deployment of AI.

Google and the Military

Google’s latest bid to cozy up to the American defense establishment marks a dramatic turnaround for the company.

Back in 2018, thousands of employees signed a letter expressing outrage at plans to develop image recognition technology for the Pentagon.

Despite assurances from the firm’s senior management that the technology wouldn’t be used to operate drones or launch weapons, Googlers widely rejected the program on moral grounds.

“We believe that Google should not be in the business of war,” the letter stated. “Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.”

The 2018 employee revolt was one of the factors that inspired Google to create its AI Principles in the first place. With that commitment not to weaponize the technology in place, the company signed a modified contract with the Pentagon three years later.

Google’s latest wave of militarization occurs in a markedly different political climate. Between the threat of layoffs, the rise of MAGA, and Silicon Valley’s broader shift to the right, the employee activism of yesteryear has largely subsided.

Without that barrier, the defense and law enforcement sectors could provide many lucrative opportunities for Google’s AI business.


James Morales

Although his background is in crypto and FinTech news, these days James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.