
Jan Leike’s Resignation Damning of OpenAI’s “Core Priorities” and Safety Culture

By James Morales

Key Takeaways

  • Jan Leike has resigned from OpenAI.
  • Alongside co-founder Ilya Sutskever, Leike was one of the voices at OpenAI calling for a greater emphasis on safety.
  • He says that in recent months his department had been “sailing against the wind.”

OpenAI has lost one of its most respected researchers, Jan Leike, who headed up the company’s “superalignment” team.

Explaining the reasons for his departure, Leike blamed disagreements with the company’s leadership over what OpenAI’s core priorities should be.

Safety vs. Speed: OpenAI’s Culture Clash

In a series of posts on X, Leike exposed major internal disagreements over OpenAI’s role and responsibilities in AI safety.

Prior to his departure, he said his department had been “sailing against the wind,” even struggling to access the resources needed to carry out its research, despite originally being allocated 20% of OpenAI’s available compute.

“I believe much more of our bandwidth should be spent getting ready for the next generations of models,” Leike emphasized. Specifically, he expressed his view that OpenAI should be more focused on areas such as safety, preparedness and the societal impact of AI.

Leike’s resignation follows that of Chief Scientist Ilya Sutskever, a co-founder who championed the work on superalignment, i.e. developing AI that aligns with human values.

The recent departures expose an internal rift between Leike and Sutskever’s camp, which prioritizes safety, and those who would rather develop and release new technology faster. 

Given that Sutskever was a central figure in previous efforts to oust CEO Sam Altman, his decision to leave suggests that tensions at the company have been simmering ever since.

Altman’s Leadership Under Renewed Scrutiny

While he didn’t mention Altman by name, it’s hard not to read Leike’s criticism as a direct shot at OpenAI’s CEO.

From the upcoming video generator Sora to the latest GPT-4 update, OpenAI’s product pipeline appears to have accelerated since Altman solidified his control of the company after Sutskever’s failed coup.

Although he tends to emphasize responsible AI in his public statements, some disagree with Altman’s priorities. Hinting at a behind-the-scenes battle over the firm’s direction, Leike’s statement that “safety culture and processes have taken a backseat to shiny products” says it all.

AI Safety and Commercial Interests

While the battle for OpenAI’s soul can be viewed as one of safety versus speed, there is another way to frame the argument. 

In a competitive technology race that pits OpenAI against Google and other potential rivals, whoever releases new products and features first gains a commercial advantage in the lucrative AI market. From this perspective, Leike’s approach threatens to hold the company back.

And the former head of superalignment isn’t the only one concerned about OpenAI’s growing emphasis on profitability.

According to its critics, since partnering with Microsoft, OpenAI’s commercial interests have come to overshadow its founding principles. 

This is the central argument of Elon Musk’s lawsuit against the company, which alleges that Altman breached an agreement to run OpenAI as a non-profit organization.

In contrast with its original emphasis on open-source development for the betterment of humanity, the suit claims OpenAI has been “transformed into a closed-source de facto subsidiary of the largest technology company in the world.”


James Morales

Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.