Microsoft has reaffirmed its ban on U.S. law enforcement agencies using generative AI for facial recognition through Azure OpenAI Service, the company’s enterprise-focused platform for OpenAI’s technology.
Language added to the Azure OpenAI Service terms of service on Wednesday explicitly bars integrations with the service from being used “by or for” U.S. law enforcement agencies for facial recognition, covering integrations with both OpenAI’s current image-analyzing models and any future ones.
A separate new clause in the terms applies to “any law enforcement globally,” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, such as body cameras and dashcams, to attempt to identify a person in uncontrolled environments.
The policy changes come on the heels of Axon, a maker of technology and weapons products for the military and law enforcement, announcing a new product that uses OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to flag the potential pitfalls, such as hallucinations (even today’s best generative AI models fabricate facts) and biases inherited from training data (especially troubling given that people of color are stopped by police at far higher rates than their white counterparts).
It’s unclear whether Axon was using GPT-4 via Azure OpenAI Service and, if so, whether the updated policy was a response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft, and OpenAI and will update this post if we hear back.
The new terms leave Microsoft some wiggle room. The complete ban on Azure OpenAI Service use applies only to U.S. law enforcement, not police elsewhere. And the global clause doesn’t cover facial recognition performed with stationary cameras in controlled environments, like a back office (though the terms bar any use of facial recognition by U.S. law enforcement).
That tracks with Microsoft’s and close partner OpenAI’s recent approach to AI-related dealings with law enforcement and defense agencies.
Earlier this year, Bloomberg reported that OpenAI is working with the Pentagon on a number of projects, including cybersecurity capabilities, a departure from the startup’s earlier policy of not supplying its AI to militaries. Separately, Microsoft has pitched using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software for military operations, according to The Intercept.
Azure OpenAI Service became available in Microsoft’s Azure Government product in February, adding compliance and management features geared toward government agencies, including law enforcement. In a blog post, Candice Ling, SVP of Microsoft’s government-focused division, Microsoft Federal, pledged that Azure OpenAI Service would be submitted to the DoD for additional authorization to support DoD missions.