Microsoft accuses group of developing tool to abuse its AI service in new lawsuit


Microsoft has taken legal action against a group that the company claims deliberately developed and used tools to bypass the safety guardrails of its cloud AI products.

According to a complaint filed by the company in December in the U.S. District Court for the Eastern District of Virginia, a group of 10 unnamed defendants used stolen customer credentials and custom-designed software to break into the Azure OpenAI Service, Microsoft's fully managed service powered by ChatGPT maker OpenAI's technology.

In the complaint, Microsoft accuses the defendants, whom it refers to only as "Does," a legal pseudonym, of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law by illegally accessing and using Microsoft's software and servers to generate "offensive" and "harmful and illicit content." Microsoft did not provide specific details about the offending content that was generated.

The company is seeking injunctive and “other equitable” relief and damages.

In the complaint, Microsoft said it discovered in July 2024 that customers with Azure OpenAI service credentials — specifically API keys, unique strings of characters used to authenticate an app or user — were being used to create content that violated the service’s Acceptable Use Policy. Subsequently, through an investigation, Microsoft discovered that API keys had been stolen from paying customers, according to the complaint.
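To illustrate the role an API key plays, here is a minimal sketch of how such a key typically authenticates a request to a cloud AI service. The endpoint, deployment name, and key below are hypothetical placeholders, and the header-based scheme shown is an assumption about a generic Azure-style REST call, not a detail from the complaint.

```python
# Sketch: how an API key authenticates an app to a cloud AI endpoint.
# All names and values here are hypothetical placeholders.
import json
import urllib.request

AZURE_ENDPOINT = "https://example-resource.openai.azure.com"  # hypothetical resource
DEPLOYMENT = "example-deployment"                             # hypothetical deployment
API_KEY = "0123456789abcdef0123456789abcdef"                  # placeholder, not a real key


def build_request(prompt: str) -> urllib.request.Request:
    """Build a request whose only credential is the api-key header.

    Whoever holds the key string can make billable calls as that customer,
    which is why a stolen key is sufficient for the kind of unauthorized
    access the complaint describes.
    """
    url = (
        f"{AZURE_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
        "/chat/completions?api-version=2024-02-01"
    )
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "api-key": API_KEY,  # the credential: a unique string, nothing more
        },
        method="POST",
    )


req = build_request("hello")
```

Because the key is the sole proof of identity in this pattern, anyone who exfiltrates it from a paying customer's code or configuration can impersonate that customer, which is the theft Microsoft describes.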

“The precise manner in which defendants obtained all of the API keys used to carry out the misconduct described in this complaint is unknown,” Microsoft’s complaint reads, “but it appears that defendants have engaged in a pattern of systematic API key theft that enabled them to steal Microsoft API keys from multiple Microsoft customers.”

Microsoft alleged that the defendants used stolen Azure OpenAI service API keys for US-based customers to create a “hacking-as-a-service” scheme. According to the complaint, to pull off the scheme, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft’s systems.

De3u allowed users to leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft alleges. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for instance, when a text prompt contains words that trigger Microsoft’s content filtering.

A screenshot of the de3u tool from Microsoft’s complaint. Image credit: Microsoft

A repo containing the de3u project code hosted on GitHub — a company owned by Microsoft — was no longer accessible at press time.

“These features, combined with defendants’ unlawful programmatic API access to the Azure OpenAI service, enabled defendants to reverse engineer means of circumventing Microsoft’s content and abuse measures,” the complaint reads. “Defendants knowingly and intentionally accessed the Azure OpenAI service protected computers without authorization, and as a result of such conduct caused damage and loss.”

In a blog post published Friday, Microsoft said the court has authorized it to seize a website “instrumental” to the defendants’ operation, which will allow the company to gather evidence, understand how the defendants’ alleged services are monetized, and disrupt any additional technical infrastructure it finds.

Microsoft also said it has “deployed countermeasures,” which the company did not specify, and “added additional safety mitigations” to the Azure OpenAI Service targeting the activity it observed.
