Two former OpenAI board members argue that artificial intelligence (AI) companies cannot be trusted to govern themselves and that third-party regulation is essential for accountability.
Helen Toner and Tasha McCauley believe that self-governance is unlikely to hold up against profit-driven pressures. This statement adds another layer of controversy for OpenAI following the recent issues surrounding Sam Altman.
Toner and McCauley stepped down from OpenAI’s board in November amid the tumultuous attempt to remove CEO Sam Altman. Altman was swiftly reinstated as CEO, and in March he returned to the board as well.
In an op-ed published in The Economist on Sunday, Toner and McCauley stood by their decision to remove Altman. They cited accusations from senior leaders that he fostered a “toxic culture of lying” and engaged in behavior that could be “characterized as psychological abuse.”
Since Altman’s return to the board in March, OpenAI’s commitment to safety has come under scrutiny, particularly after the company used an AI voice resembling actor Scarlett Johansson in its GPT-4o demo.
Toner and McCauley argued that OpenAI cannot be trusted to hold itself accountable with Altman at the helm. “Developments since his return – including his reinstatement to the board and the departure of senior safety-focused talent – bode ill for the OpenAI experiment in self-governance,” they wrote.
The former board members stressed the necessity of government intervention to establish effective regulatory frameworks. They asserted that it’s impossible to achieve OpenAI’s mission to benefit “all of humanity” without external oversight. They acknowledged that while they once believed in OpenAI’s ability to self-govern, their experience demonstrated that self-governance cannot reliably withstand profit-driven pressures.
Toner and McCauley acknowledged that while government regulation is necessary, poorly designed laws could stifle “competition and innovation” by imposing burdens on smaller companies.
“It is crucial that policymakers act independently of leading AI companies when developing new rules,” they wrote. “They must be vigilant against loopholes, regulatory ‘moats’ that shield early movers from competition, and the potential for regulatory capture.”
In April, the Department of Homeland Security announced the establishment of the Artificial Intelligence Safety and Security Board, which aims to provide recommendations for the “safe and secure development and deployment of AI” across the United States’ critical infrastructure.
To date, the European Union is the only jurisdiction to have passed comprehensive AI regulation. The European Parliament recently approved the AI Act, which now regulates AI across the EU, though its passage was not without challenges. It remains unclear whether, and how, other jurisdictions will follow suit.
Altman has also come under the spotlight for other controversies. Vox reported that OpenAI employees who wished to leave the company faced expansive and highly restrictive exit documents, and that those who refused to sign promptly were threatened with losing their vested equity, a provision both severe and uncommon in Silicon Valley. The policy effectively forced former employees to choose between forfeiting potentially millions of dollars in earned equity and agreeing to a perpetual non-disparagement clause.
The news reportedly caused significant turmoil within OpenAI, a private company valued at approximately $80 billion. As at many Silicon Valley startups, OpenAI’s employees receive a substantial portion of their compensation through equity, and they typically expect that, once their equity has vested according to their contract’s schedule, it is irrevocably theirs.
Altman’s vision for AI has also made headlines recently. He envisions a future in which AI is deeply integrated into daily life. In an interview with MIT Technology Review, Altman described the ideal AI as a “super-competent colleague” that is intimately familiar with every detail of one’s life, including every email and conversation.
He emphasized that such an AI would be proactive, handling simpler tasks instantly and tackling more complex ones with minimal user input. His vision extends beyond chatbots to AI that actively assists in accomplishing real-world tasks.