OpenAI has recently updated its usage policy, notably removing explicit prohibitions on the military applications of its technology. The previous bans on ‘weapons development’ and ‘military and warfare’ applications have been replaced with a broader injunction not to ‘use our service to harm yourself or others.’ OpenAI states that this change is part of a substantial rewrite aimed at enhancing the document’s clarity and readability.
While OpenAI spokesperson Niko Felix underscores the universal principle of avoiding harm, concerns have arisen over the vagueness of the new policy with respect to military use. The revised language emphasizes legality rather than safety, raising questions about how OpenAI intends to enforce the updated policy.
Heidy Khlaaf, engineering director at Trail of Bits, points out that the shift from an explicit ban on military applications to a more flexible approach emphasizing compliance with the law may have implications for AI safety. Although OpenAI's tools lack direct lethal capabilities, their use in military contexts could contribute to imprecise and biased operations, potentially resulting in increased harm and civilian casualties.
These policy changes have sparked speculation about OpenAI's willingness to engage with military entities. Critics argue that the company may be quietly weakening its stance against doing business with the military, underscoring the need for clarity about how OpenAI will enforce its rules.
Why does it matter?
Experts, among them Lucy Suchman and Sarah Myers West, highlight OpenAI's close collaboration with Microsoft, a significant defense contractor, as a potential influence on the company's evolving policies. Microsoft's $13 billion investment in OpenAI, the maker of the large language model (LLM) ChatGPT, adds a layer of complexity to the discourse, especially as armed forces globally are actively exploring the integration of machine learning techniques into their operations.
The recent adjustments to OpenAI's policy coincide with growing interest from military entities, including the Pentagon, in adopting large language models like ChatGPT for diverse applications, ranging from administrative tasks such as paperwork processing to more sophisticated data analysis. The softened language and the removal of explicit prohibitions raise questions about the potential ramifications of deploying OpenAI's tools in military contexts, and about the ethical implications of their use in defense sectors.