Microsoft accuses group of developing tool to abuse its AI service in new lawsuit

## Microsoft Counters Cloud AI Safety Bypass

Microsoft is suing an anonymous group it believes developed tools to bypass the security guardrails of its cloud AI service. According to Microsoft’s December complaint, ten unidentified individuals allegedly used stolen customer credentials and custom-built software to break into the Azure OpenAI Service, Microsoft’s managed service powered by OpenAI’s technologies.

## Accusations and Complaint Details

The defendants, referred to as “Does,” are accused of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a federal racketeering law. The alleged illicit activities include unauthorized access to, and use of, Microsoft’s software to generate offensive and illicit content. Microsoft declined to disclose specifics of the inappropriate content produced.

## Cloud Breach Discovery and Ensuing Actions

Microsoft first identified abuse of Azure OpenAI Service API keys (unique strings of characters that authenticate an app or user) in July 2024. The company discovered that the stolen API keys were being abused by hackers who had allegedly set up a “hacking-as-a-service” scheme. According to the complaint, the defendants used a client-side tool called de3u to facilitate this process.
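To illustrate why stolen API keys are so damaging: an Azure OpenAI request is typically authenticated by nothing more than a key string sent in an HTTP header, so anyone holding a customer’s key can issue requests billed to, and attributed to, that customer. The sketch below (the endpoint URL and key are hypothetical placeholders, not values from the complaint) shows roughly how such a key travels with a request:

```python
import json
import urllib.request

def build_request(endpoint: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build an HTTP request authenticated solely by an API key header.

    Possession of the bare key string is all that is needed -- there is no
    per-request proof of who is actually sending it, which is why stolen
    keys can be resold or abused at scale.
    """
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
    return urllib.request.Request(
        url=endpoint,
        data=body,
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

# Hypothetical endpoint and key, for illustration only; the request is
# built but never sent.
req = build_request(
    "https://example-resource.openai.azure.com/openai/chat/completions",
    "SECRET-KEY",
    "hello",
)
```

This is a minimal sketch of key-based authentication in general, not a reconstruction of the de3u tool or of Microsoft’s internal APIs.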

## Aftermath and Steps for Safety Improvement

With the court’s authorization, Microsoft seized a website it says was central to the defendants’ operations. The seizure will help the company gather evidence and dismantle additional technical infrastructure. Microsoft also says it has implemented unspecified “countermeasures” and “added additional safety mitigations” to protect the Azure OpenAI Service.

Original source: Read the full story at TechCrunch