OpenAI, the AI research laboratory, recently released a statement addressing safety concerns surrounding its products. The statement comes in response to several controversies that have emerged in recent weeks, including a call from industry leaders to pause the development of models like GPT-4, a ban on ChatGPT in Italy, and a complaint to the Federal Trade Commission about the risks of dangerous misinformation, particularly to children.
OpenAI’s safety pledge aims to assure the public that its products are safe while admitting there is still room for improvement. The company emphasized that it works to ensure safety is built into its systems at all levels and is committed to collaborating with policymakers and stakeholders to create a safe AI ecosystem.
However, the statement lacked details on how OpenAI plans to mitigate risk, enforce its policies, or work with regulators.
The hybrid rebuttal-apology, released on Thursday, sought both to reassure the public that the company’s products are safe and to acknowledge that there is room for improvement.
The safety pledge from OpenAI sounds like a whack-a-mole reaction to all the recent controversies. In the past week alone, ChatGPT has been outright banned in Italy, Elon Musk and Steve Wozniak have published an open letter calling for a six-month pause in the development of models like GPT-4, and a complaint has been made to the Federal Trade Commission alleging that ChatGPT poses dangerous misinformation risks, particularly to children.
Oh, and there was that glitch that exposed some users’ chat histories and personal information to other users.
According to OpenAI, it works “to ensure safety is built into our system at all levels.” Before releasing GPT-4, OpenAI spent over six months “rigorously testing” it, and the company said it is looking into verification options to enforce its over-18 age requirement.
The company emphasized that it does not sell users’ personal data and uses it only to improve its AI models. It also stated its readiness to work with lawmakers and to continue working with AI stakeholders “to create a safe AI ecosystem.”
OpenAI admitted near the middle of the safety pledge that creating a safe LLM requires real-world feedback. It claims that learning from public use will make the models safer and will enable OpenAI to monitor misuse.
“Real-world use has also led us to develop increasingly nuanced policies against behavior that poses a genuine risk to people while allowing for the many beneficial applications of our technology,” the statement reads.
OpenAI offered details about its “approach to safety,” but aside from the pledge to investigate age verification, most of the statement reads like boilerplate. There was little information about how it intends to reduce risk, enforce its policies, or collaborate with regulators.
OpenAI prides itself on developing AI products transparently, but the announcement offers little detail about what it plans to do now that its AI is in the wild.
What is the technology behind ChatGPT?
According to OpenAI, ChatGPT, which was trained with the Reinforcement Learning from Human Feedback (RLHF) machine-learning method, can simulate dialogue, answer follow-up questions, admit its mistakes, challenge incorrect premises, and refuse inappropriate requests.
Initially, human AI trainers supplied the model with conversations in which they played both sides: the user and an AI assistant. The public-testing version of the bot tries to understand user queries and reply with in-depth, conversational responses that resemble human-written text.
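A core step in RLHF is training a reward model on human preference data: labelers compare pairs of responses, and the model learns to score the preferred one higher. The sketch below is a deliberately minimal illustration of that idea, not OpenAI's implementation: the "responses" are hypothetical hand-made feature vectors rather than language-model outputs, and the reward model is a simple linear function fit with the standard pairwise (Bradley-Terry) loss.

```python
import math

def reward(w, x):
    """Linear reward model: score a response's feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(pairs, lr=0.1, epochs=200):
    """Fit weights w with the pairwise preference loss:
    loss = -log(sigmoid(reward(preferred) - reward(rejected)))."""
    w = [0.0] * len(pairs[0][0])
    for _ in range(epochs):
        for preferred, rejected in pairs:
            margin = reward(w, preferred) - reward(w, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))  # P(preferred beats rejected)
            # Gradient step: push the preferred response's score up
            # relative to the rejected one's.
            for i in range(len(w)):
                w[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return w

# Hypothetical features per response, e.g. [helpfulness, verbosity, rudeness].
# Human labelers preferred the first response in each pair.
preference_pairs = [
    ([1.0, 0.2, 0.0], [0.1, 0.9, 0.8]),
    ([0.9, 0.4, 0.1], [0.2, 0.5, 0.9]),
    ([0.8, 0.3, 0.0], [0.3, 0.8, 0.7]),
]

w = train_reward_model(preference_pairs)
```

After training, the reward model scores each preferred response above its rejected counterpart; in full RLHF, a policy (the chatbot itself) is then optimized against such a reward model.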
If you haven’t tried ChatGPT yet, you can do so right now on OpenAI’s website.