A group of AI experts and tech executives, including Elon Musk and Apple co-founder Steve Wozniak, has issued an open letter urging leading artificial intelligence labs to pause development of AI systems more advanced than GPT-4. The open letter, published on Tuesday by the nonprofit Future of Life Institute, has around 1,000 signatories and cites “profound risks” to human society as the reason for the call to action.
The letter calls for an immediate pause of at least six months in the training of AI systems more powerful than GPT-4, a pause it says should be public, verifiable and include all key actors. “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” the letter begins.
The group argues that advanced AI systems could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. That level of planning and management, they say, is not happening.
The open letter comes just a couple of weeks after the public debut of OpenAI’s GPT-4, the large language model that powers the premium version of the wildly popular chatbot ChatGPT. The new GPT-4 can handle more complex tasks and produce more nuanced results than earlier versions and is also less subject to the flaws of earlier versions, according to OpenAI.
To do their work, systems like GPT-4 need to be trained on large quantities of data that they can then draw on to answer questions and perform other tasks. ChatGPT, which burst onto the scene in November, has a human-like ability to write work emails, plan travel itineraries, produce computer code and perform well on tests such as the bar exam.
Since the start of the year, companies from Google and Microsoft to Adobe, Snapchat and Grammarly have all announced services that take advantage of those generative AI skills.
However, OpenAI’s own research has shown that there are risks that come with these AI skills. Generative AI systems can quote unreliable sources, or as OpenAI noted, “increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others.”
The group behind the open letter argues that companies are rushing out products without adequate safeguards or even understanding of the implications. AI experts are spooked by where all this might be heading. The letter says, “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”
The signatories believe that society needs to take a step back and pause development while researchers and policymakers assess the risks and put safeguards in place. They are calling for transparency, public discussion and public engagement around AI developments, so that everyone can have a say in the future of AI.
The open letter has sparked debate among AI experts. Some argue that a pause is necessary to ensure that AI is developed in a safe and ethical way; others are concerned that a pause would slow progress and give other countries an advantage in AI development.
Despite the debate, it is clear that AI is advancing at a rapid pace and has the potential to change our lives in profound ways. It is up to society as a whole to ensure that this change is managed in a responsible and ethical way.