In a surprising move, OpenAI quietly revised its usage policy, eliminating explicit language that barred the use of its technology for military purposes. The change, made on January 10, 2024, removes the previous ban on “weapons development” and “military and warfare” from the policy. It has sparked concern about the implications of OpenAI’s tools, including ChatGPT, being employed in military applications.
The policy redaction
The modification, part of a broader policy rewrite aimed at clarity and readability, has garnered attention for its impact on the permissible use of OpenAI’s technology. The new policy retains a general injunction against using the service to harm oneself or others, with examples such as developing or using weapons. However, the specific ban on military and warfare applications has disappeared.
OpenAI’s perspective
OpenAI spokesperson Niko Felix explained that the goal was to establish universal principles that are easy to remember and apply globally. Emphasizing principles like “Don’t harm others,” the company believes these guidelines are broad yet easily understood. Felix declined to say explicitly whether the broader “harm” ban covers all military uses, but he pointed to the policy’s prohibition on using the technology to develop or use weapons, injure others, or engage in unauthorized activities that violate security.
Concerns from experts
Heidy Khlaaf, an engineering director at Trail of Bits, expressed concern that the new policy seems to prioritize legality over safety. She highlighted the potential risks associated with using AI in military applications, citing instances of bias and hallucinations within Large Language Models (LLMs). The concern is that imprecise and biased operations could lead to increased harm and civilian casualties.
Enforcement and real-world consequences
The real-world consequences of OpenAI’s policy shift remain unclear, and the company had previously been non-committal about enforcing its ban on military and warfare applications. Although ChatGPT cannot directly cause harm on its own, experts argue that deploying it in military contexts, even non-violent ones, may indirectly contribute to an institution focused on lethality.
Possible motivations and partnerships
Some experts suggest that OpenAI may be silently relaxing its stance against military involvement. The shift from explicitly banning “military and warfare” to focusing on “weapons” might create space for supporting operational infrastructures without directly engaging in weapons development. OpenAI’s close partnership with Microsoft, a major defense contractor, raises questions about the influence of such relationships on the company’s policies.
Military interest in AI
The policy changes come at a time when militaries globally are eager to incorporate machine learning techniques. The Pentagon is tentatively exploring the use of ChatGPT and other large language models to gain a strategic advantage. While acknowledging concerns about accuracy and security risks, the military remains enthusiastic about adopting artificial intelligence tools.
OpenAI’s revision of its usage policy, particularly the removal of explicit bans on military applications, raises important questions about the ethical use of advanced language models. As the intersection of AI and military interests continues to evolve, it becomes crucial to strike a balance between technological innovation and ethical considerations to prevent potential harm and misuse. The implications of these policy changes will undoubtedly fuel ongoing discussions about the responsible deployment of powerful AI technologies in various domains.
Featured image credit: Specna Arms/Unsplash