
OpenAI Introduces New Position to Combat Insider Threats, Enhancing Collaboration with White House

Last Updated March 26, 2024 10:55 AM
James Morales

Key Takeaways

  • OpenAI is looking to hire an insider risk investigator to help mitigate internal security threats.
  • Google and other AI developers have been subjected to corporate espionage attempts.
  • The White House has pushed AI labs to ramp up their internal security measures.

In 2023, the Biden administration met with seven leading American technology firms to discuss how they could manage the risks posed by AI. Following those meetings, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI committed to investing in cybersecurity and “insider threat safeguards” to help prevent unreleased model weights from falling into the wrong hands.

As a result of its commitment to the White House, OpenAI is currently seeking a “seasoned Insider Risk Investigator” to help identify and protect against internal security threats.

What Does an Insider Risk Investigator Do?

Like the related role of insider threat analyst, the insider risk investigator’s job is to prevent sensitive information from being compromised by company insiders.

These days, most large corporations employ dedicated staff to manage insider risks. Some threats come from malicious actors, but careless data handling and other security lapses can be just as dangerous.

OpenAI’s new investigator will be expected to detect and analyze potential insider threats and investigate any suspicious activities they uncover.

The successful candidate will play a crucial role in safeguarding OpenAI’s assets by analyzing anomalous activities, promoting a secure culture, and interacting with various departments to mitigate risks.

AI Secrets Targeted by Corporate Espionage

Some of the firms that joined the White House accord along with OpenAI have already been targets of corporate espionage. 

Insiders can compromise security by taking sensitive information with them when they leave one firm to join another. But some corporate spies have been caught working for the competition while still employed by the target company.

Just this month, a Google AI engineer was arrested for stealing trade secrets while secretly working for two Chinese companies.

Of course, American firms being targeted for their technological leadership is nothing new. But the government’s proactive intervention recognizes that AI risks don’t just have economic implications. They have national security ones too.

At the government’s request, OpenAI committed to enhanced internal and external security testing of AI systems that could raise national security concerns. The company promised to give “significant attention” to “bio, chemical, and radiological risks,” such as the ways in which systems can lower barriers to entry for weapons development, design, acquisition, or use.

Addressing AI Risks

Building on its previous commitments, OpenAI launched a new “catastrophic risk preparedness” initiative in October to coincide with an international AI Safety Summit held in the UK.

The Summit centered on future threats posed by “Frontier AI” – highly capable, general-purpose AI systems that can exceed the capabilities of today’s most advanced models.

Alongside Anthropic, Google, and Microsoft, OpenAI has formed the Frontier Model Forum (FMF), an industry body focused on anticipating and preventing AI dangers.

Following the AI Safety Summit, OpenAI also announced the formation of a new preparedness team.

“The team will help track, evaluate, forecast and protect against catastrophic risks,” the company stated in November. 

Specifically, it highlighted cybersecurity; chemical, biological, radiological, and nuclear (CBRN) threats; autonomous replication and adaptation (ARA); and “individualized persuasion” as some of the most pressing concerns for AI safety.

Prioritizing Cybersecurity

As well as investing to protect sensitive model weights from insider threats, the AI firms that convened in Washington agreed to beef up vulnerability testing to identify security issues before models are released to the public.

In addition to subjecting the OpenAI API to annual third-party penetration testing, the company has an ongoing bug bounty program that rewards security researchers and white hat hackers for reporting weaknesses.
