OpenAI has revealed that a ChatGPT data breach was caused by a vulnerability in an open-source library the chatbot relies on. The flaw was fixed quickly and seemingly with minimal consequence, but the incident could foreshadow future threats to chatbots and their users, and OpenAI is taking precautions to prevent future breaches in the application.
When ChatGPT and other chatbots initially became publicly available, the cybersecurity community was concerned about how AI technology may be exploited to conduct cyberattacks. It didn’t take long for threat actors to find out how to circumvent the safety checks and utilize ChatGPT to write malicious code.
The tables appear to have turned. Instead of ChatGPT being used to cause cyber incidents, the technology itself has become the victim. According to Security Week, OpenAI, the company that created the chatbot, disclosed a ChatGPT data breach caused by a vulnerability in an open-source library the system depends on. The breach knocked the service offline until it could be repaired.
ChatGPT’s popularity was obvious from the moment it was released in late 2022. Everyone wanted to play with the chatbot, from authors to software professionals. Despite its flaws (some of its prose was clunky or plagiarized), ChatGPT quickly became the fastest-growing consumer app in history, with over 100 million monthly users by January. Within a month of its release, approximately 13 million people used the AI technology daily. In comparison, TikTok, another hugely popular app, took nine months to attain comparable user numbers.
ChatGPT data breach
It’s just a matter of time until a popular app or technology is targeted by threat actors. In the case of the ChatGPT data breach, the exposure was made possible by a flaw in the open-source Redis library. The flaw allowed users to view the conversation history of other active users.
According to Heavy.AI, open-source libraries are used to “develop dynamic interfaces by storing readily accessible and frequently used routines and resources, such as classes, configuration data, documentation, help data, message templates, pre-written code and subroutines, type specifications, and values.”
Redis is used by OpenAI to store user information for faster recall and access. Because thousands of contributors create and use open-source code, vulnerabilities can easily emerge and go unnoticed. Because threat actors are aware of this, attacks on open-source libraries have increased by 742% since 2019.
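OpenAI has not published the exact code involved, but the general pattern the article describes — Redis sitting in front of a database so user information can be recalled quickly — is the classic cache-aside pattern. The sketch below is illustrative only: the class, key format, and in-memory dict standing in for Redis are assumptions, not OpenAI's implementation.

```python
# Minimal cache-aside sketch. A plain dict stands in for Redis here so the
# example is self-contained; in production the get/set calls would go to a
# Redis client instead. All names and key formats are hypothetical.

class UserCache:
    def __init__(self, backing_db):
        self._cache = {}          # stand-in for a Redis instance
        self._db = backing_db     # authoritative data store

    def get_profile(self, user_id):
        key = f"user:{user_id}:profile"
        if key in self._cache:        # cache hit: fast path, no DB query
            return self._cache[key]
        profile = self._db[user_id]   # cache miss: fall through to the DB
        self._cache[key] = profile    # populate the cache for next time
        return profile

# Illustrative usage
db = {42: {"name": "Alice", "plan": "plus"}}
cache = UserCache(db)
print(cache.get_profile(42))  # first call misses, second call hits the cache
```

The failure mode behind the breach class the article describes does not require an attacker at all: if cached entries are ever returned against the wrong key or the wrong connection, one user is simply served another user's data.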
The ChatGPT vulnerability was small in the larger scheme of things, and OpenAI corrected the flaw within days of its discovery. However, even minor cyber incidents can cause significant damage.
And that was not the whole story. As researchers dug deeper into the ChatGPT data breach, they discovered that the same vulnerability was likely to blame for payment information being visible for a few hours before ChatGPT was taken offline.
“Some users could see another active user’s first and last name, email address, payment address, last four digits of a credit card number (only), and credit card expiration date. No full credit card numbers were ever exposed,” OpenAI said in a statement regarding the incident.
Chatbots, AI, Cybersecurity, and ChatGPT data breach
The ChatGPT data breach was rectified quickly and seemingly with minimal consequence, with impacted paying customers accounting for fewer than 1% of users. However, the incident could be a foreshadowing of future threats to chatbots and their users.
There are already privacy issues regarding the usage of chatbots. Mark McCreary, the co-chair of Fox Rothschild LLP’s privacy and data security practice, told CNN that ChatGPT and chatbots are analogous to the black box in an airliner. AI technology accumulates massive quantities of data and then utilizes that data to produce answers to queries and prompts. Furthermore, anything in the chatbot’s memory becomes available to other users.
Chatbots, for example, may capture a single user’s notes on any topic and then summarize or search for further details. However, if those notes contain sensitive data — such as an organization’s intellectual property or sensitive customer information — they are added to the chatbot library. The user has lost control of the information.
AI restrictions are being tightened
Some businesses and entire countries are cracking down over privacy concerns. JPMorgan Chase, for example, has limited its workers’ use of ChatGPT due to the company’s restrictions on third-party software and apps, but there are also worries about the security of financial information entered into the chatbot. And Italy cited citizen data protection as the reason for temporarily blocking the program across the country, with officials saying the concern stems from GDPR compliance.
Threat actors are also expected to employ ChatGPT to send sophisticated and realistic phishing emails, according to experts. Poor grammar and strange sentence construction are no longer reliable indicators of a phishing scheme; chatbots can now emulate native speakers with customized messages. ChatGPT is also capable of fluent language translation, which will be a game changer for foreign adversaries.
The use of AI to produce misinformation and conspiracy campaigns is an equally harmful technique, and its consequences may extend beyond cyber threats. Researchers have used ChatGPT to generate an op-ed resembling content found on InfoWars or other well-known websites that promote conspiracy theories.
Threats are being addressed by OpenAI
Chatbot development will bring new cyber risks, whether through more sophisticated linguistic abilities or sheer popularity, making the technology a prime target for attack. To that end, OpenAI is taking precautions to avoid future data breaches in the application: it will pay a bug bounty of up to $20,000 to anyone who finds previously unknown vulnerabilities.
However, according to The Hacker News, “the program does not cover model safety or hallucination issues, wherein the chatbot is prompted to generate malicious code or other faulty outputs.” So it appears that OpenAI wishes to harden the technology against external attacks while doing little to prevent the chatbot from becoming a source of cyberattacks.
ChatGPT and other chatbots will be key actors in the realm of cybersecurity. Only time will tell whether the technology will be the target of assaults or the generator of them.