
OpenAI Shares Blueprint for Protecting Research Supercomputers Amidst Transparency Scrutiny

By Samantha Dunn

Key Takeaways

  • OpenAI has received public and regulatory scrutiny over its data-sourcing practices.
  • The tech company has shared details of the security architecture behind its research supercomputers.
  • OpenAI also shared that it is hiring for a number of research- and security-related roles.

OpenAI has unveiled the security framework designed to safeguard its AI training supercomputers, in what appears to be an attempt to address critical security challenges amid ongoing scrutiny over transparency and data use.

The AI company's efforts are intended to protect its intellectual property from unauthorized access and theft.

This infrastructure update comes as OpenAI faces significant criticism regarding its data handling practices.

OpenAI Shares Security Update

In a recent blog post, OpenAI detailed the measures it has implemented to protect its AI infrastructure. The framework leverages Azure and Kubernetes for secure orchestration, employs multi-layered defense strategies, and integrates tools like AccessManager for stringent identity and access management.
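AccessManager is an internal OpenAI tool and its interface is not public, so the following is only a loose illustration of the general idea: a minimal sketch of the kind of namespace-scoped, least-privilege access control a Kubernetes-based research cluster can enforce, written with the official Kubernetes Python client. Every name in it (the research namespace, the training-job service account, the role names) is hypothetical.

```python
# A rough sketch of namespace-scoped, least-privilege RBAC on a research
# cluster, using the official Kubernetes Python client. All names here
# (namespace, service account, roles) are hypothetical; OpenAI's actual
# AccessManager tooling is internal and not publicly documented.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster
rbac = client.RbacAuthorizationV1Api()

# A Role confined to one namespace, granting read-only access to pods.
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "training-job-reader", "namespace": "research"},
    "rules": [{
        "apiGroups": [""],
        "resources": ["pods", "pods/log"],
        "verbs": ["get", "list", "watch"],  # no create/delete: least privilege
    }],
}
rbac.create_namespaced_role(namespace="research", body=role)

# Bind the Role to the service account a training job runs under,
# so the job can inspect its own pods and nothing else.
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "training-job-reader-binding", "namespace": "research"},
    "subjects": [{
        "kind": "ServiceAccount",
        "name": "training-job",
        "namespace": "research",
    }],
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "Role",
        "name": "training-job-reader",
    },
}
rbac.create_namespaced_role_binding(namespace="research", body=binding)
```

Scoping roles to a single namespace in this way is one conventional layer of a multi-layered defense: even if a workload is compromised, its credentials cannot reach beyond its own slice of the cluster.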

OpenAI shared the details of its security architecture, adding that it uses internal and external red teams to test its security controls.


OpenAI CTO Opens Company Up to Investigation

The AI company has faced rising concerns about its transparency, particularly following its decision to withhold specific details about the training data used for its latest model, GPT-4. Critics argue that this lack of transparency undermines trust and raises questions about the ethical use and sourcing of data.

OpenAI CTO Mira Murati said in an interview that Sora was trained on publicly available and licensed data. When pressed on the exact sources, however, Murati was unable to confirm where the data came from.

Adding to the controversy, there have been allegations of data misuse. For instance, OpenAI's previous policies allowed customer data to be used for service improvements without explicit consent, a practice that has since been revised following public outcry. The company now asserts that customer data will not be used for model training unless users opt in, a move intended to rebuild trust among its user base.

The FTC is currently investigating OpenAI, with a specific focus on its training dataset.

Bolstering Its Security Team

The recent blog post included job openings for research-focused security engineers. The inclusion of these openings suggests that OpenAI is not only committed to advancing AI but is also working proactively to mitigate risks and head off potential legal challenges related to data misuse and security breaches.

Given the pressures OpenAI is currently facing, being transparent about its security measures and how it protects data may be a strategic move to meet regulatory expectations and rebuild public trust.


Samantha Dunn

Samantha started as a traditional writer and journalist before falling down the Web3 rabbit hole. She now explores the ways in which emerging technology is impacting economies, industries, and the individual.