Key Takeaways
OpenAI Senior Advisor for AGI Readiness Miles Brundage will leave his role on Friday, Oct. 25, marking the latest in a string of exits at the company.
Although Brundage’s departing statement was carefully worded to avoid suggesting any grievance with his employer, his emphasis on “gaps” in AGI readiness points to potential conflicts emerging between private companies like OpenAI, policymakers, and researchers.
In a company dominated by engineers and technical experts, Brundage was one of a small group of purely policy-oriented researchers at OpenAI.
The teams he led (Policy Research and then AGI Readiness) shaped OpenAI’s safety processes, implementing initiatives such as the external red teaming program and promoting the use of system cards.
Explaining his decision to leave, Brundage cited a desire to influence AI development from outside the industry, where he could continue his research free of conflicts of interest and potential bias.
He also suggested that OpenAI’s publication review process had “become too much” and started to constrain his ability to freely publish research.
Going forward, Brundage said he plans to start a new nonprofit or join an existing one to work on AI policy research and advocacy.
Brundage’s announcement comes almost exactly a month after CTO Mira Murati declared her intention to leave OpenAI.
Having steered the company through a transformational period and overseen the development of breakthrough models like GPT-4, Murati acknowledged that “there is never an ideal time to step away.” However, she said the current moment “feels right” as the launch of OpenAI’s o1 “marks the beginning of a new era.”
Although she didn’t reveal a precise timeline for her departure, Murati said she would be focused on ensuring a “smooth transition” for the duration of her employment.
On the same day that Murati announced her intention to step down, OpenAI CEO Sam Altman shared that Chief Research Officer Bob McGrew and Vice President for Research Barret Zoph will also be leaving.
“Mira, Bob, and Barret made these decisions independently of each other and amicably,” he said. “But the timing of Mira’s decision was such that it made sense to now do this all at once so that we can work together for a smooth handover to the next generation of leadership.”
Other significant exits this year include Ilya Sutskever, who left to start a new safety-focused AI venture. Meanwhile, co-founders John Schulman and Jan Leike have taken up new roles at OpenAI rival Anthropic.
Sutskever, Schulman, and Leike had worked on OpenAI’s superalignment initiative, and their departures point to internal rifts over AI safety; the superalignment team has since been disbanded.
As evidence of the divide, OpenAI Researcher Colin Burns, who left the company in April, recently claimed that as many as half of the employees focused on AI safety and preparedness had resigned in frustration at their work being marginalized.
The OpenAI board responsible for overseeing the company has also been caught up in the debate over AI safety.
Helen Toner and Tasha McCauley, who stepped down from the board last year in protest against Altman’s leadership, have argued that profit-driven AI companies shouldn’t be trusted to govern themselves.
The ex-board members have been highly critical of OpenAI’s culture and warned that the company lacks effective oversight.