Sam Altman, CEO of OpenAI, proposes establishing an international oversight agency to govern artificial intelligence (AI) technologies. Given how quickly AI is evolving, Altman says, an agency-based approach would work better than inflexible laws.
Likening AI to airplanes, Altman emphasizes the need for a safety testing framework, pointing out that AI systems can have negative effects that cross national borders. He argues that such powerful systems should be inspected by an international agency and subjected to reasonable safety tests.
Sam Altman calls for global oversight to ensure safe AI development
Sam Altman explains the need to regulate AI by comparing it to airplane safety, arguing that a safety testing framework designed to prevent human casualties, as in aviation, is essential for AI. When he boards an airplane, he says, he assumes it will be safe, and the same level of trust should apply to AI.
Altman argues that an international agency, rather than national laws, can offer more flexible solutions for regulating AI, and that such flexibility matters for a technology changing this fast. Current laws, he says, cannot properly regulate AI and could become outdated within a few months.
The European Union approved the Artificial Intelligence Act in March this year, which aims to categorize AI risks and ban unacceptable use cases. Meanwhile, US President Joe Biden signed an executive order demanding greater transparency from developers of the world’s largest AI models. The state of California also continues to lead the way in regulating AI this year, with its legislature considering more than 30 bills, according to Bloomberg.
Altman states that international regulation of AI technologies should be flexible enough to adapt to developments in the field while establishing a global safety standard. Such international cooperation could be critical to minimizing the global risks posed by AI.
Keeping AI in check is critical to both the sustainability of technological innovation and societal security. Without proper regulation, AI systems can behave in unexpected ways, causing unforeseen harm. In particular, powerful AI applications, such as autonomous weapons, could pose serious threats to international security if left unchecked.
On the other hand, over-regulation of AI could stifle innovation and slow down technological development. Overly strict laws may prevent companies and researchers from developing new AI applications, leaving them behind in global economic and technological competition. AI regulations therefore need to strike a balance between technological innovation and the protection of societal and individual rights.
As Sam Altman has suggested, an international agency could provide standardized global oversight of AI technologies, driving innovation while minimizing potential risks. By encouraging international cooperation and information exchange, such an approach would maximize AI’s positive potential and limit its harms. In other words, regulating AI at the right pace and in an effective manner is critical both to fostering new developments and to preventing societal harm.
Featured image credit: Emiliano Vittoriosi / Unsplash