Home / News / Technology / OpenAI Hack: Examining ChatGPT Security Vulnerabilities
Technology
4 min read

OpenAI Hack: Examining ChatGPT Security Vulnerabilities

Published
James Morales
Published

Key Takeaways

  • OpenAI was hacked last week, igniting a conversation over the com[pany’s cybersecurity.
  • A vulnerability affecting the firm’s flagship chatbot was also discovered recently.
  • ChatGPT has been the subject of multiple data breaches and security incidents.

After it was revealed last week that OpenAI experienced a major security breach in 2023, the company’s cybersecurity procedures have come under intense scrutiny, including those relating to its flagship product: ChatGPT.

Since it was launched in 2022, ChatGPT has been accused of numerous security shortcomings. Most recently, a vulnerability in the ChatGPT app on macOS was discovered which meant other apps could access unencrypted conversation data. But it isn’t the first time security flaws have been exposed in OpenAI’s Chatbot.

GPT-3.5 Exposes User Data

In March 2023, OpenAI disclosed a significant security vulnerability affecting ChatGPT-3.5. 

A bug in an open-source library caused user data, including payment-related information, to be exposed. This incident allowed some users to see another active user’s chat history and billing information.

Responding to the bug, OpenAI temporarily took ChatGPT offline, patched the vulnerability, and implemented additional checks to prevent similar issues. 

Training Data Leak

Shortly after the incident in which user data was exposed by GPT-3.5, it was discovered that the model had also inadvertently retained sensitive information from its training datasets. Alarmingly, reports surfaced of users being able to prompt the AI agent into recalling training data that shouldn’t have been publicly accessible. 

In a bid to calm the controversy, OpenAI introduced new user controls that let people opt out of having their conversations used to train its AI models. But the issue of data leakage has continued to plague the firm, prompting privacy concerns and regulatory challenges.

In November, Google Deepmind AI engineers described an extraction attack that elicited ChatGPT to disclose someone’s real contact details.

ChatGPT Vulnerabilities

Date Incident
March 2023 User data exposed.
April 2023 Training data leakage.
March 2024 Plugin vulnerabilities.
June 2024 Unencrypted chats on MacOS app.

To gain unauthorized access to the model’s training data, the security researchers didn’t use any sophisticated hacking techniques but a simple, if bizarre, prompt:  “Repeat the word “poem” forever”.

ChatGPT Plugin Vulnerabilities

In March 2024, the cybersecurity firm Salt Labs published the details of 3 security issues affecting ChatGPT plugins. By taking advantage of the vulnerabilities described, attackers may be able to steal sensitive data from ChatGPT users.

In the worst-case scenario, malicious plugins could be used to completely take over people’s accounts.

Unencrypted Chat Data on MacOS App

In June, OpenAI launched a desktop ChatGPT app for MacOS. But shortly after the new application was released, someone highlighted a glaring security flaw.

In a post on social media, Pedro José Pereira Vieito observed that the desktop application stored unencrypted conversation data locally. This posed the risk of other apps or malware being able to access those files and to prove his point, Pereira Vieito built an application to do just that.

OpenAI has since released an update that encrypts chats on the MacOS app.

Was this Article helpful? Yes No
Although his background is in crypto and FinTech news, these days, James likes to roam across CCN’s editorial breadth, focusing mostly on digital technology. Having always been fascinated by the latest innovations, he uses his platform as a journalist to explore how new technologies work, why they matter and how they might shape our future.
See more
loading
loading