This week, Miles Brundage, senior adviser to OpenAI’s AGI readiness team, left the company, making headlines. On his way out, Brundage issued a clear warning that underscores growing concern within the AI community: neither OpenAI nor any other leading AI lab is fully prepared for the risks of artificial general intelligence (AGI).
His exit is the latest in a series of high-profile departures, and it raises a serious question: if even the most advanced labs aren’t ready to manage AGI’s potential impact on society, who is?
I just sent this message to my colleagues, and elaborate on my decision and next steps in a blog post (see next tweet): pic.twitter.com/NwVHQJf8hM
— Miles Brundage (@Miles_Brundage) October 23, 2024
Rising AI safety concerns as OpenAI loses key researchers
Brundage’s exit is the latest in a string of departures at OpenAI, with several senior figures recently putting varying degrees of distance between themselves and the company and its leadership, including CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph. Brundage says he left in part because he wanted to pursue AI policy research without the constraints he had experienced at OpenAI, where he felt increasingly limited in how openly he could publish. His departure was followed by the disbanding of OpenAI’s AGI readiness team, raising doubts about whether the company remains committed to long-term safety initiatives.
AGI development, meanwhile, is accelerating faster than society is prepared to handle responsibly, Brundage said in his farewell statement. “Neither OpenAI nor any other frontier lab is ready for AGI, and the world is also not prepared,” he wrote, adding that while OpenAI’s leadership may agree with his assessment, the company’s shift toward commercialization could jeopardize its safety-focused mission. OpenAI has reportedly faced pressure to monetize its work, moving from nonprofit research toward a for-profit model and risking long-term safety considerations in the name of commercial product delivery.
Brundage’s warning is not an isolated one. It comes shortly after the departures of key researchers such as Jan Leike and co-founder Ilya Sutskever, widening the gulf between AI development and safety. Earlier this year, Leike left after his safety team struggled to secure enough resources for crucial research, including computing power. These internal frictions and external worries point to a significant gap between the ambitions of AI labs and the readiness of global governance institutions to deal with AGI’s potentially harmful effects on society.
Former OpenAI researcher Suchir Balaji has also spoken out, saying the company placed profit before safety, amid multiple lawsuits accusing OpenAI of copyright infringement. Balaji’s departure, like Brundage’s, underscores growing unease among researchers who believe AI’s commercial momentum is outpacing efforts to mitigate its risks.
When I first heard about OpenAI LP, I was concerned, as OAI's non-profit status was part of what attracted me. But I've since come around about the idea for two reasons: 1. As I dug into the details, I realized many steps had been taken to ensure the org remains mission-centric.
— Miles Brundage (@Miles_Brundage) March 11, 2019
As OpenAI continues to lose key figures from its safety teams, the broader AI community faces an urgent question: why are we racing toward AGI without fully reckoning with the ramifications of that race? Brundage’s departure and his stark warning are a reminder that the risk of AGI isn’t just that any one lab might lose the competition; it’s that development is racing ahead of society’s capacity to control it. The future of AI safety may well depend on researchers like him who choose independent paths and make the case for a slower, more accountable approach to AGI development.
The spotlight is now on OpenAI and its peers, as the world waits to see whether frontier labs can keep innovating while bearing the responsibility of ensuring that AGI, when it arrives, works for society rather than against it.
Image credit: Furkan Demirkaya/Ideogram