Making his first appearance before Congress, Altman testified before the Senate Judiciary Subcommittee on Tuesday. He stressed the importance of lawmakers enacting safety guidelines and rules for AI to “mitigate the risks of increasingly potent models.”
“We realize that many are concerned about how it could alter how we live. And we are,” Altman added, warning that if this technology goes wrong, it “can go quite wrong.”
Over the course of the nearly three-hour hearing, Altman and two other witnesses—Professor Emeritus Gary Marcus and Christina Montgomery, chief privacy and trust officer at IBM—discussed with close to 60 lawmakers the potential risks of unchecked AI, ranging from job disruption to intellectual property theft.
“My worst concern is that we cause harm to the world,” he remarked.
CEO calls for “do no harm” approach in AI development
As one step, Altman urged lawmakers to adopt a licensing scheme for businesses that build powerful AI systems. Legislators would set out safety requirements a company must meet to receive a license, and would have the authority to revoke that license if the requirements are not met.
Asked how AI would alter the employment landscape, Altman conceded that the technology might eliminate many jobs, but said he does not believe this precludes the creation of new ones.
“I think [AI] can completely automate away some jobs,” he asserted. “And new ones will be created, which we think will be far better.”
In March, tech billionaires including Elon Musk called for a six-month pause on AI development in an open letter. When Sen. Josh Hawley asked the witnesses about the letter on Tuesday, Altman responded that the “frame of the letter is wrong,” and that what matters are audits and safety criteria that must be passed before training the technology. “If we pause for six months, I’m not sure what we do then, do we pause for another six?” he said.
The government’s AI actions remain uncertain, and “hard decisions” ahead
Altman said that OpenAI spent more than six months evaluating GPT-4 before deploying it to the general public, and that the company “wants to go in” the direction of the standards OpenAI built and applied before releasing the technology, rather than “a calendar clock pause.”
Sen. Richard Blumenthal, the chair of the panel, added that enacting a moratorium and “sticking our head in the sand” are not workable solutions. “Safeguards and precautions, yeah, but a flat stop sign? The world won’t wait. That would make me very concerned,” he said.
In his closing remarks, Blumenthal stressed that while “hard decisions” will need to be made, businesses creating AI should for now adopt a “do no harm” stance. It remains unclear what steps, if any, the government will take on AI.