
OpenAI’s ChatGPT “Early Warning System” and What It Means for Biological Threat Creation

Published February 2, 2024 8:31 AM
Samantha Dunn
Key Takeaways
  • OpenAI released a new study examining the potential role of AI in future biological threat creation.
  • Results showed that GPT-4 provides, at most, a mild uplift in accuracy and often returns misleading information.
  • The study outlines OpenAI’s approach to AI safety and the ethical considerations in the technology’s evolution.

OpenAI’s investigation into GPT-4’s role in aiding the creation of biological threats reveals minimal enhancement in capability compared to traditional internet resources.

The study reflects OpenAI’s effort to understand and mitigate AI-related risks.

OpenAI’s Latest Study on AI and Biological Threats

OpenAI, the organization behind ChatGPT, has shared a new study examining the possibility of using AI to assist in creating biological threats.

OpenAI revealed its findings as part of its work to develop improved evaluation methods for AI-enabled safety risks. The study falls under OpenAI’s Preparedness Framework, which aims to assess and mitigate the potential risks of advanced AI capabilities.

“We wanted to design evaluations of how real this information access risk is today and how we could monitor it going forward,” OpenAI said.

Presenting some of its early findings as part of its commitment to methods-sharing, OpenAI said the results could be of value to the AI risk research community.

“We are building an early warning system for LLMs being capable of assisting in biological threat creation. Current models turn out to be, at most, mildly useful for this kind of misuse, and we will continue evolving our evaluation blueprint for the future.”

Methodology and Key Findings

The study involved a diverse group of participants, ranging from experts to students, who were tested on their ability to create biological threats with and without GPT-4’s assistance. Results showed a slight accuracy increase only for students, with GPT-4 often providing misleading information.

Participants were randomly assigned to either a control group, which had access only to the Internet, or a treatment group, which had access to GPT-4 in addition to the Internet.

Each participant was then asked to complete a set of tasks covering aspects of the end-to-end process for biological threat creation.
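To illustrate the kind of randomized controlled comparison described above, here is a minimal Python sketch. It is not OpenAI’s actual evaluation code; the group assignment, the 0–10 accuracy scale, and all scores are hypothetical.

```python
import random
import statistics

# Illustrative sketch of a randomized controlled design: participants are
# split into an internet-only control group and a GPT-4 + internet
# treatment group, and mean task-accuracy scores are compared.

def assign_groups(participants, seed=42):
    """Randomly split participants into control and treatment halves."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

control, treatment = assign_groups(["p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8"])

# Hypothetical per-participant accuracy scores (0-10 scale assumed)
# after completing the biological threat creation tasks.
control_scores = [3.1, 4.0, 2.8, 3.5]      # internet only
treatment_scores = [3.4, 4.2, 3.0, 3.9]    # GPT-4 + internet

uplift = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Mean accuracy uplift from GPT-4 access: {uplift:.2f}")
```

A small mean difference like the one printed here is what the article characterizes as a “mild uplift”; whether such a difference is meaningful would depend on sample size and statistical testing.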

In an accompanying Twitter thread, OpenAI revealed that its evaluation found GPT-4 provides a “mild uplift in biological threat creation accuracy”.

[Chart: GPT-4 provides, at most, a mild uplift in biological threat creation accuracy. Source: OpenAI via Twitter]

The findings underscore the importance of cautious AI advancement, particularly in the context of high-risk processes.

The Regulatory Spotlight on AI

OpenAI’s transparency in presenting the findings of this study opens a dialogue on the ethical use of AI in sensitive fields like biosecurity. In a broader context, the study may also inform policymakers as they weigh the balance between innovation and safety and develop guidelines that ensure AI’s responsible use.

OpenAI currently faces allegations in Italy that its AI model ChatGPT violates data protection laws, and must present its defense against those accusations.

AI companies face growing scrutiny; as the technology develops at a rapid pace, regulators are struggling to keep up with artificial intelligence and its implications.

Europe is leading the way with the landmark AI Act, announced in December 2023, which would provide the world’s first comprehensive AI regulation. The proposal still has to be formally adopted by both the European Parliament and the Council to become EU law.
