
OpenAI Eases AI Safety Testing as Sam Altman Flags Authoritarian AGI Risks

By James Morales
Key Takeaways
  • OpenAI whistleblowers say the company has cut back its model safety evaluations.
  • Employees are under pressure to complete safety tests faster so that OpenAI can release its models sooner.
  • Nevertheless, CEO Sam Altman continues to warn about the dangers of AI.

OpenAI CEO Sam Altman has publicly emphasized the need for rigorous AI safety testing to avoid risks such as abuse by authoritarian governments.

Yet, according to whistleblowers cited by the Financial Times, the company has cut model safety evaluations from months to days amid heightened competition from rivals.

OpenAI Accused of Taking Shortcuts

According to one OpenAI employee charged with testing the upcoming o3 model, the company implemented “more thorough safety testing” in the past, when the technology “was less important.”

Even though the latest models pose a greater risk of being weaponized, “because there is more demand for it, they want it out faster,” the employee said. “This is a recipe for disaster.”

Members of OpenAI’s model evaluation team noted that they were given six months to safety test GPT-4. But with OpenAI pushing to release o3 as early as next week, some testers said they would have less than a week to conduct their checks.

Sam Altman on AI Safety

While OpenAI has moved to ease AI safety testing, CEO Sam Altman is well aware of the risks posed by the technology.

In a recent blog post, Altman said that as AGI (artificial general intelligence) approaches, AI developers will need to build more guardrails and constraints to prevent the technology from being abused.

Altman’s approach centers on balance. Going forward, he said it will be important to focus on “individual empowerment” and ensuring equal access to AGI.

“The other likely path we can see is AI being used by authoritarian governments to control their population through mass surveillance and loss of autonomy,” Altman warned.

Authoritarian AI

Although Altman’s fears most immediately recall Beijing’s use of facial recognition and other AI tools for mass surveillance, China isn’t alone in using the technology to monitor populations.

The U.K. government has openly called for police to expand the use of facial recognition to stamp out crime.

Regulators in the country also recently approved a controversial facial recognition tool developed by Meta, sparking fears about the technology’s encroachment into everyday life.

Meanwhile, in the U.S., Elon Musk’s Department of Government Efficiency (DOGE) is reportedly using AI to snoop on federal workers and root out government employees who might be hostile to President Trump’s agenda.

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.