Microsoft sues group it accuses of building a tool to exploit its AI service

Microsoft has filed a lawsuit against a group it alleges deliberately developed and used tools to bypass the safety guardrails of its cloud AI services.

In a lawsuit filed in the U.S. District Court for the Eastern District of Virginia in December, the tech giant accused ten unnamed defendants of breaking into the Azure OpenAI Service using stolen customer credentials and purpose-built software. The Azure OpenAI Service is a fully managed service backed by technologies developed by OpenAI, the creator of ChatGPT.

These defendants, referred to legally as “Does,” are accused by Microsoft of breaching the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law. The tech giant alleges that they illegally accessed and used Microsoft’s software and servers with the intention of generating offensive, harmful, and illicit content, though no specific details were provided about the nature of the content.

Microsoft is seeking both injunctive and “other equitable” relief and damages. The company claims to have found in July 2024 that Azure OpenAI Service credentials, specifically API keys used for app or user authentication, were being used to create content that contravenes the service’s approved use policy. Microsoft’s investigation revealed that these API keys had been stolen from paying customers.
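For context on why stolen API keys matter here, a minimal sketch of how a client authenticates to an Azure OpenAI-style REST endpoint. The endpoint URL, deployment name, and key below are placeholders for illustration, not real values or the defendants' actual tooling; the point is simply that whoever holds a valid key can authenticate as the paying customer it belongs to.

```python
import json
import urllib.request

# Hypothetical values for illustration only; not real credentials or a live deployment.
ENDPOINT = (
    "https://example-resource.openai.azure.com/openai/deployments/"
    "dalle/images/generations?api-version=2024-02-01"
)
API_KEY = "<customer-api-key>"  # Azure OpenAI accepts an "api-key" request header

def build_image_request(prompt: str) -> urllib.request.Request:
    """Construct (but do not send) an image-generation request.

    The api-key header is the whole credential: any party holding it
    can bill usage to the customer the key was issued to.
    """
    body = json.dumps({"prompt": prompt, "n": 1, "size": "1024x1024"}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
        method="POST",
    )

req = build_image_request("a watercolor lighthouse")
print(req.get_method(), req.full_url)
```

Because the key travels in a plain request header, exfiltrating it from a customer's code or configuration is enough to impersonate that customer, which is consistent with the theft pattern the lawsuit describes.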

Microsoft’s lawsuit doesn’t specify how the defendants obtained the API keys used in their alleged misconduct, but it does describe a systematic pattern of credential theft that let them harvest keys from multiple Microsoft customers.

The tech company also alleges that the defendants used these stolen API keys belonging to U.S.-based customers to execute a “hacking-as-a-service” scheme. They purportedly created a client-side tool called de3u and additional software to route communications from de3u to Microsoft’s systems.

According to Microsoft, de3u enabled users to use the stolen API keys to generate images with one of OpenAI’s models, DALL-E, without needing to write their own code. The tool also allegedly tried to stop the Azure OpenAI Service from revising the prompts used to generate images, which typically happens when a text prompt contains words that trigger Microsoft’s content filtering.
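The prompt-revision step described above can be pictured with a toy filter. This is an oversimplified illustration, not Microsoft's actual filtering logic (real services use trained classifiers rather than a keyword list): the service screens each prompt and rewrites it before it reaches the model, so a client that suppresses this step sends its prompt through unmodified.

```python
# Toy illustration of server-side prompt screening.
# BLOCKED_TERMS is a hypothetical trigger list, not Microsoft's.
BLOCKED_TERMS = {"violence", "gore"}

def screen_prompt(prompt: str) -> str:
    """Rewrite a prompt if it contains a blocked term.

    A client that bypasses this screening (as de3u allegedly
    attempted) would reach the model with the raw prompt intact.
    """
    words = prompt.lower().split()
    if any(term in words for term in BLOCKED_TERMS):
        return "[prompt revised by content filter]"
    return prompt

print(screen_prompt("a quiet mountain lake"))  # passes through unchanged
print(screen_prompt("a scene of gore"))        # gets revised
```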

Microsoft has since obtained court approval to seize a website central to the defendants’ operation. This will allow the company to gather evidence, understand how the alleged services are monetized, and disrupt any further technical infrastructure it uncovers. Microsoft also says it has implemented countermeasures and added extra safety mitigations to the Azure OpenAI Service to address the activity it observed.
