OpenAI suffered a privacy breach earlier this week when a bug in an open-source library exposed the conversation histories of random users to other people. On Monday morning, several users noticed that the chat history feature was displaying unfamiliar past conversations that apparently belonged to someone else.
An outage followed later the same day. OpenAI initially remained quiet on the situation, but on Wednesday, CEO Sam Altman confirmed the data exposure was real and apologized.
You can read Sam Altman's apology tweet about the OpenAI leak below.
we had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and we have just finished validating.
a small percentage of users were able to see the titles of other users’ conversation history.
we feel awful about this.
— Sam Altman (@sama) March 22, 2023
ChatGPT archives and displays past conversations for its users, giving them a running record of their exchanges with the chatbot. While some people could see the titles of other users' conversation histories during the leak, it is not clear whether any sensitive personal information was exposed. The good news is that OpenAI has released a patch for the bug, which has since been validated.
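OpenAI has not yet published the bug's technical details, but one plausible way a shared-library flaw can mix up users' data is through connection reuse: if a pooled connection is recycled while a reply is still queued on it, the next request can read the previous user's response. The Python sketch below is purely illustrative of that general failure mode; the `SharedConnection` class and its methods are hypothetical and are not OpenAI's actual code.

```python
import queue

class SharedConnection:
    """Hypothetical pooled connection whose replies arrive on a queue."""
    def __init__(self):
        self._responses = queue.Queue()

    def send(self, user_id):
        # The "server" replies with that user's conversation titles.
        self._responses.put(f"conversation titles for {user_id}")

    def recv(self):
        # Reads the oldest queued reply, whoever it belongs to.
        return self._responses.get_nowait()

conn = SharedConnection()

# User A's request is sent, but the client cancels or times out
# before reading the reply, leaving it queued on the connection.
conn.send("user_a")

# The connection is returned to the pool and reused for user B.
# User B sends a request, then reads the *stale* queued reply,
# which belongs to user A: cross-user exposure.
conn.send("user_b")
print(conn.recv())  # -> "conversation titles for user_a"
```

In a correct client, a connection in this half-finished state would be discarded rather than returned to the pool, which is exactly the kind of fix a library patch can deliver.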
The OpenAI leak is not the company's only problem
However, chat histories from Monday, March 20, may be partially lost: users will not be able to access conversations created between 1 am and 10 am PDT that day. Altman said the company plans to share more details in a "technical postmortem," but for now, the conversation history sidebar has been down for ChatGPT since Monday.
Interestingly, OpenAI has yet to address another apparent bug, which caused the ChatGPT Plus payment form to seemingly leak the email addresses of other users on Monday morning. Unlike with the conversation-history leak, the company has not responded to requests for comment on this matter.
In conclusion, OpenAI's recent privacy snafu highlights the challenge of securing user data in AI-driven products, and it underscores the importance of continuously monitoring and testing software to catch vulnerabilities before they can be exploited.