The Samsung ChatGPT leak is the most recent data exposure caused by use of the chatbot, and it comes at a sensitive time for privacy concerns surrounding the language model.
While using ChatGPT to assist with their work, Samsung employees unintentionally leaked sensitive company information. The business gave engineers at its semiconductor division access to the AI writer to help resolve problems with its source code. However, employees entered confidential material into the tool, including source code for new software, meeting notes, and details about their hardware.
As a result, there were three documented instances of Samsung employees leaking critical information via ChatGPT in less than a month. Those trade secrets are now effectively in the hands of OpenAI, the firm behind ChatGPT, which retains user input data to further train its models.
Precautions after the Samsung ChatGPT leak
According to reports in the local Korean media, one Samsung employee entered source code from a malfunctioning semiconductor database into ChatGPT and asked it to help find a solution. In another instance, a worker shared confidential code in an effort to fix defective equipment. A third employee reportedly fed an entire internal meeting into the chatbot and asked it to generate the minutes.
In response, Samsung Semiconductor is now building its own internal AI for staff use, though employees will be restricted to prompts no larger than 1,024 bytes.
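Samsung has not published how such a limit would be enforced, but a guardrail of this kind is straightforward: measure the prompt's encoded size in bytes, not characters, before it reaches the model. The sketch below is purely illustrative; the function name and limit-handling are assumptions, with only the 1,024-byte figure taken from the reporting.

```python
# Illustrative sketch of a 1,024-byte prompt guardrail (names are hypothetical).
# The cap is measured in bytes, so multi-byte text such as Korean (3 bytes per
# Hangul syllable in UTF-8) hits the limit far sooner than ASCII does.

MAX_PROMPT_BYTES = 1024

def check_prompt(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt's UTF-8 encoding fits within the byte limit."""
    return len(prompt.encode("utf-8")) <= limit

# A short ASCII prompt passes easily:
print(check_prompt("Summarize this error log."))        # True

# A ~460-character Korean prompt encodes to well over 1,024 bytes and is rejected:
korean_prompt = "메모리 테스트 시퀀스를 최적화해 주세요. " * 20
print(check_prompt(korean_prompt))                       # False
```

Measuring bytes rather than characters matters here: 1,024 bytes allows roughly 1,024 English characters but only about 340 Hangul syllables, so the effective limit is much tighter for Korean-language prompts.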
In one of the cases above, an employee asked ChatGPT to optimize test sequences for identifying faults in chips. Making that process as efficient as possible could save chip firms significant time in testing and verifying processors, with corresponding cost reductions, which helps explain why engineers reached for the tool in the first place.
Too little too late
The issue with disclosing trade secrets to ChatGPT is that typed prompts don't simply vanish when an employee logs off. According to OpenAI, it may use data that users provide to ChatGPT and its other consumer services to improve its AI models. In other words, unless users specifically opt out, OpenAI retains that data. Because it is "not able to delete specific prompts," OpenAI expressly cautions users against sharing sensitive information.
In the wake of the incidents, Samsung Electronics issued a warning to its employees about the potential risks of disclosing private information, stating that such data is now housed on servers owned by OpenAI and cannot be retrieved. Any kind of data leak might be disastrous for the corporation in the fiercely competitive semiconductor industry.
But ChatGPT users at Samsung aren't the only ones who overshare. Research by the cybersecurity company Cyberhaven recently found that 3.1% of workers it monitored had entered confidential company data into ChatGPT at least once. By Cyberhaven's estimate, a corporation with about 100,000 employees could be sending private information to OpenAI hundreds of times each week.
In the face of the Samsung ChatGPT leak, the company doesn't appear to have any legal avenue to demand that OpenAI retrieve or delete the private information it now holds. Some have suggested that ChatGPT violates the EU's GDPR on this very point, since the ability to delete personal data on request is one of the fundamental principles of the law governing how businesses collect and use data. It is also one of the reasons ChatGPT was banned in Italy.