
Ex-OpenAI Board Members Say AI Companies Can’t Withstand Pressure of “Profit Incentives” as Sam Altman’s Controversies Mount

Last Updated May 27, 2024 2:07 PM
Giuseppe Ciccomascolo
Key Takeaways
  • Two former OpenAI board members resigned over concerns about CEO Sam Altman’s leadership and the company’s direction.
  • Helen Toner and Tasha McCauley believe government intervention is essential to ensure AI development is safe and beneficial.
  • The former board members believe profit motives can compromise a company’s commitment to safety.
  • Altman is also under the spotlight over reported threats to departing employees’ equity and over his expansive vision for AI.

Two former OpenAI board members have expressed that artificial intelligence (AI) companies cannot be trusted to govern themselves and that third-party regulation is essential for accountability.

Helen Toner and Tasha McCauley believe that self-governance is unlikely to hold up against profit-driven pressures. This statement adds another layer of controversy for OpenAI following the recent issues surrounding Sam Altman.

AI Companies Can’t Govern Themselves

Helen Toner and Tasha McCauley, former board members of OpenAI, stepped down in November during a tumultuous attempt to remove CEO Sam Altman. Altman was swiftly reinstated as CEO and, months later, rejoined the board.

In an op-ed in The Economist on Sunday, Toner and McCauley stood by their decision to remove Altman. They cited accusations from senior leaders that he fostered a “toxic culture of lying” and engaged in behavior that could be “characterized as psychological abuse.”

Since Altman’s return to the board in March, OpenAI’s commitment to safety has come under scrutiny, particularly after the company used an AI voice resembling actress Scarlett Johansson for its GPT-4o model.

Toner and McCauley argued that OpenAI cannot be trusted to hold itself accountable with Altman at the helm. “Developments since his return – including his reinstatement to the board and the departure of senior safety-focused talent – bode ill for the OpenAI experiment in self-governance,” they wrote.

The former board members stressed the necessity of government intervention to establish effective regulatory frameworks. They asserted that it’s impossible to achieve OpenAI’s mission to benefit “all of humanity” without external oversight. They acknowledged that while they once believed in OpenAI’s ability to self-govern, their experience demonstrated that self-governance cannot reliably withstand profit-driven pressures.

Ex-OpenAI Directors Ask For AI Regulation

Toner and McCauley acknowledged that while government regulation is necessary, poorly designed laws could stifle “competition and innovation” by imposing burdens on smaller companies.

“It is crucial that policymakers act independently of leading AI companies when developing new rules,” they wrote. “They must be vigilant against loopholes, regulatory ‘moats’ that shield early movers from competition, and the potential for regulatory capture.”

In April, the Department of Homeland Security announced the establishment of the Artificial Intelligence Safety and Security Board, which aims to provide recommendations for the “safe and secure development and deployment of AI” across the United States’ critical infrastructure.

To date, the European Union is the only jurisdiction to have passed comprehensive AI regulation. The European Parliament recently approved the AI Act, which now governs AI across the EU, though its passage was not without challenges. It remains unclear whether – and how – other jurisdictions will follow suit.

Altman Wants AI To Know All Of Us

Sam Altman has also come under the spotlight for other controversies. Vox reported that employees wishing to leave OpenAI faced expansive and highly restrictive exit documents. Those who refused to sign promptly were threatened with losing their vested equity in the company – an unusually severe provision by Silicon Valley standards. The policy effectively forced former employees to choose between forfeiting potentially millions of dollars in earned equity or agreeing to a perpetual non-disparagement clause.

The news reportedly caused significant turmoil within OpenAI, a private company valued at approximately $80 billion. Like many Silicon Valley startups, OpenAI’s employees receive a substantial portion of their compensation through equity. They typically expect that, once their equity has vested according to their contract’s schedule, it is irrevocably theirs.

Altman’s vision of AI has also made headlines recently. He envisions a future in which AI is deeply integrated into daily life. In an interview with MIT Technology Review, Altman described the ideal AI as a “super-competent colleague” that is intimately familiar with every detail of one’s life, including every email and conversation.

He emphasized that such AI would be proactive, handling simpler tasks instantly and tackling more complex ones with minimal user input. Altman’s vision extends beyond chatbots; he envisions AI that actively assists in accomplishing real-world tasks.
