Key Takeaways
In a recent update to its “AI Principles,” Google has watered down language meant to prevent its tech from being used to cause harm.
The changes are part of a wider repositioning on the topic of AI safety that has seen Google quietly legitimize the use of AI for “national security,” paving the way for previously off-limits use cases including weapons and surveillance systems.
First codified in 2018, Google’s AI Principles describe the firm’s approach to the responsible development of artificial intelligence and outline how it intends to prevent harm.
Until the latest update, the framework listed four specific applications the company wouldn’t pursue: technologies likely to cause overall harm; weapons designed to injure people; surveillance that violates internationally accepted norms; and technologies that contravene international law and human rights.
In the updated version, however, almost all references to potentially harmful AI applications are gone. In their place, a single line commits Google to “mitigate unintended or harmful outcomes and avoid unfair bias.”
While mentions of harm prevention are conspicuously absent from Google’s latest AI Principles, references to national security are increasingly common in the company’s literature.
Discussing the updated principles, Google DeepMind CEO Demis Hassabis and Senior Vice President for Research James Manyika embraced the current political mood in America.
“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” they argued. “We believe that companies, governments, and organizations […] should work together to create AI that […] supports national security,” they added.
Alphabet’s President of Global Affairs Kent Walker put things even more bluntly in a recent blog post arguing that “to protect national security, America must secure the digital high ground.”
Adopting a radically different tone from the Google of the past, Walker called for the government, “including the military and intelligence community,” to take a leading role in the procurement and deployment of AI.
Google’s latest bid to cozy up to the American defense establishment marks a dramatic turnaround for the company.
Back in 2018, thousands of employees signed a letter expressing outrage at plans to develop image recognition technology for the Pentagon.
Despite assurances from the firm’s senior management that the technology wouldn’t be used to operate drones or launch weapons, Googlers widely rejected the program on moral grounds.
“We believe that Google should not be in the business of war,” the letter stated. “Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.”
The 2018 employee revolt was one of the factors that inspired Google to create its AI Principles in the first place. Having publicly committed not to weaponize the technology, the company signed a modified contract with the Pentagon three years later.
Google’s latest wave of militarization occurs in a markedly different political climate. Between the threat of layoffs, the rise of MAGA, and Silicon Valley’s broader shift to the right, the employee activism of yesteryear has largely subsided.
Without that barrier, the defense and law enforcement sectors could provide many lucrative opportunities for Google’s AI business.