
Five Senate Democrats Demand Clarity From OpenAI in 12-Question Letter

By Giuseppe Ciccomascolo
Key Takeaways

  • Five Democratic senators asked OpenAI to provide data on its safety and security measures.
  • The 12-question letter follows employee warnings about rushed safety testing of the company’s latest AI model.
  • Several employees have left the ChatGPT maker, citing a lack of transparency and a hostile work environment.

Five Democratic senators have asked OpenAI to provide data on its safety and security measures following employee warnings about rushed safety testing of its latest artificial intelligence (AI) model.

This is not the first time the company behind ChatGPT has faced scrutiny from the US Senate: CEO Sam Altman testified on AI before lawmakers last year. Now, OpenAI has less than one month to provide an official response that satisfies the senators’ requests.

Democrats’ Letter To OpenAI

The group, led by Sen. Brian Schatz of Hawaii, demanded that OpenAI provide data on its safety and security efforts following employee warnings about rushed safety testing of its latest AI model.

They asked OpenAI CEO Sam Altman to detail how the company will meet its public commitments to avoid harm, such as aiding the creation of bioweapons or enabling cyberattacks, and requested information on employee agreements that may have silenced workers. This follows whistleblower allegations that OpenAI imposed restrictive agreements.

Lawmakers urged OpenAI not to enforce nondisparagement agreements and to allow independent experts to assess its AI systems before release. They also requested details on misuse and safety risks observed in recent models, along with documentation of how OpenAI will meet its safety commitments, by August 13, 2024.

Stephen Kohn, a lawyer representing OpenAI whistleblowers, argued that the senators’ requests are “not sufficient” to address the chilling effect that prevents employees from speaking out about company practices. “What steps are they taking to change that cultural message and make OpenAI an organization that welcomes oversight?” he asked.

What OpenAI Says

In response, OpenAI said it had removed nondisparagement terms from staff agreements and that it did not cut corners on safety, despite employee concerns about the rushed launch of GPT-4o. The senators stressed the importance of public trust in OpenAI’s governance, safety testing, employment practices, and cybersecurity policies.

OpenAI spokesperson Liz Bourgeois said that the company’s commitment to dedicate 20% of its computing power to safety does not apply to a single safety team. Instead, the allocation will be spread over multiple years, with resources increasing as the technology evolves.

OpenAI is expected to respond by August 13 with documentation of how it plans to meet its voluntary pledge to the Biden administration to protect the public from abuses of generative AI.

Last year, Altman testified before a Senate subcommittee, agreeing on the need to regulate the powerful AI technology developed by his company and others such as Google and Microsoft.

During that first testimony before Congress, Altman urged lawmakers to regulate AI, while committee members demonstrated a growing understanding of the technology. The hearing highlighted deep unease among technologists and government officials about AI’s potential harms, though Altman faced a relatively friendly audience in the subcommittee.

OpenAI Battles With Former Employees

Several employees have left OpenAI, accusing the company of lacking transparency and CEO Sam Altman of fostering a hostile work environment. Leopold Aschenbrenner, a former technical program manager, criticized OpenAI’s security measures as inadequate against potential unauthorized access by foreign entities seeking sensitive information.

Aschenbrenner was dismissed for leaking information, a move he claimed was politically motivated, and subsequently went public with his concerns about the company’s security practices.

William Saunders, another former employee, left OpenAI because he feared its research could pose significant threats to humanity. He likened OpenAI’s trajectory to the Titanic disaster.

While Saunders is not concerned about the current version of ChatGPT, he fears future versions and the development of AI that could surpass human intelligence. He believes AI workers have a duty to warn the public about dangerous AI developments.


Giuseppe Ciccomascolo

Giuseppe Ciccomascolo began his career as an investigative journalist in Italy, where he contributed to both local and national newspapers, focusing on various financial sectors. Upon relocating to London, he worked as an analyst for Fitch's CapitalStructure and later as a Senior Reporter for Alliance News. In 2017, Giuseppe transitioned to covering cryptocurrency-related news, producing documentaries and articles on Bitcoin and other emerging digital currencies. He also played a pivotal role in establishing the academy for a cryptocurrency exchange website. Crypto remained his primary area of interest throughout his tenure as a writer for ThirdFloor.