In a recent blog post, Sam Altman predicted that as AGI approaches, AI developers will have to make some unpopular decisions in the name of safety.
Generally, the OpenAI CEO said he favors letting individuals use AGI as they like. However, he acknowledged risks, including the threat from “authoritarian governments,” that will require some tradeoffs between individual empowerment and safety.
Altman’s reference to AGI surveillance reflects growing concerns about the use of AI to monitor, track, and record individuals’ behavior across multiple domains.
The archetypal AI surveillance technology is facial recognition, which has unsurprisingly become a favorite tool of the kind of autocratic governments Altman alluded to in his blog post.
For instance, having aggressively deployed AI-powered facial recognition to support its own surveillance state, China has now become a major exporter of the technology.
Moreover, while Western firms developing the technology are restricted from selling it to the most authoritarian governments, there are few such limitations for the Chinese AI developers behind some of the most powerful facial recognition systems on the market.
An analysis of Chinese technology exports in 2024 observed a strong “autocratic bias” in facial recognition: Chinese companies generate far more business from other autocratic regimes than they do from liberal democracies.
While facial recognition typically analyzes CCTV footage, other AI surveillance tools monitor social media, digital communications, internet traffic, and financial transactions.
Connecting the dots between various online and offline activities has traditionally been a challenge for law enforcement and intelligence agencies tasked with monitoring and policing populations.
But in the future, more capable AI systems, which Altman refers to as AGI, may be able to automate this process, creating powerful new surveillance tools with truly Orwellian potential.
Altman clearly envisages OpenAI acting as a force for good in this story.
In his blog post, his reference to unpopular “decisions and limitations related to AGI safety” suggests a desire to limit the technology’s potential abuse, even if that means restricting it.
Despite having a strong influence on the AGI narrative, partly due to Altman’s own interest in the term, OpenAI is far from the only player shaping the direction of AI development.
Post-DeepSeek, the company’s once unquestionable AI leadership looks increasingly shaky. So, too, does Altman’s belief that “you can spend arbitrary amounts of money and get continuous and predictable gains.”
After all, DeepSeek made a major AI breakthrough that few predicted by spending significantly less than some of its American peers.
As the technology evolves, restraints will need to be negotiated by private corporations and national governments alike.
That includes restraints on companies like OpenAI, whose surveillance machine operates on a different level from state-administered systems but remains formidable nonetheless.