ChatGPT-maker OpenAI has lost almost half of its team working on artificial general intelligence (AGI) safety, according to Daniel Kokotajlo, one of its former governance researchers.
Kokotajlo, who left the company in April 2024, says around 30 people initially worked on AGI safety. That number has since dropped to around 16, according to the former employee.
The safety team departures include cofounder John Schulman, Collin Burns, Steven Bills, Yuri Burda, Jan Hendrik Kirchner, Jeffrey Wu, Jonathan Uesato, and Todor Markov.
In an interview with Fortune on Tuesday, Aug. 27, Kokotajlo said the departures were not “like a coordinated thing.”
Instead, the former employee said it was “just people sort of individually giving up.”
“People who are primarily focused on thinking about AGI safety and preparedness are being increasingly marginalized,” Kokotajlo added.
In May, Ilya Sutskever and Jan Leike, co-leads of OpenAI’s superalignment safety team, made headlines when they left the company.
In a series of posts announcing his resignation on X, formerly Twitter, Leike said he had been “disagreeing with OpenAI leadership about the company’s core priorities for quite some time.”
Leike claimed safety had “taken a backseat to shiny products” at the ChatGPT-maker.
Kokotajlo said he believed OpenAI was “fairly close” to achieving AGI but not ready “to handle all that entails.”
The former employee told Fortune that he suspects this led to a “chilling effect” on those attempting to publish research on its risks.
Kokotajlo said OpenAI’s communications and lobbying wings were exerting an “increasing amount of influence” over what could and couldn’t be published.
CCN reached out to OpenAI for comment but did not receive a reply by the time of publication.
AGI refers to a type of AI that can understand, learn, and apply knowledge at a human-like level across a wide range of tasks.
Unlike current AI systems, which are designed for specific tasks, AGI aims to be versatile, adaptable, and capable of performing any intellectual task that a human being can do.
AGI would also not be limited to pre-programmed instructions; it would be able to learn from new experiences, generalize from them, and apply its knowledge to new and unforeseen situations.
Despite these grand and frankly frightening ambitions for AGI, many AI leaders believe the hype is overblown.
Yann LeCun, Meta’s chief scientist, previously said he believes AI is decades away from reaching even a sliver of human sentience.
“[If] you think AGI is in, the more GPUs you have to buy,” LeCun said. The comment was a jab at Nvidia CEO Jensen Huang, who famously claimed that AI would be “fairly competitive” with humans within five years.
LeCun believes training large language models on text data is insufficient to create AGI.
“Train a system on the equivalent of 20,000 years of reading material, and they still don’t understand that if A is the same as B, then B is the same as A,” he said. “There’s a lot of really basic things about the world that they just don’t get through this kind of training.”