Microsoft sues service for creating illicit content with its AI platform
SEE YOU IN COURT

Service used undocumented APIs and other tricks to bypass safety guardrails.

Dan Goodin – Jan 10, 2025 6:10 pm

Credit: Benj Edwards | Getty Images

Microsoft is accusing three individuals of running a "hacking-as-a-service" scheme that was designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content.

The foreign-based defendants developed tools specifically designed to bypass safety guardrails Microsoft has erected to prevent the creation of harmful content through its generative AI services, said Steven Masada, the assistant general counsel for Microsoft's Digital Crimes Unit. They then compromised the legitimate accounts of paying customers and combined those two things to create a fee-based platform people could use.

A sophisticated scheme

Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identities.

"By this action, Microsoft seeks to disrupt a sophisticated scheme carried out by cybercriminals who have developed tools specifically designed to bypass the safety guardrails of generative AI services provided by Microsoft and others," lawyers wrote in a complaint filed in federal court in the Eastern District of Virginia and unsealed Friday.

The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to those accounts through a now-shuttered site at rentry[.]org/de3u. The service, which ran from last July until Microsoft took action to shut it down in September, included detailed instructions on how to use these custom tools to generate harmful and illicit content.

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them.

Microsoft attorneys included images illustrating the network infrastructure and the user interface provided to users of the defendants' service. Credit: Microsoft
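The complaint doesn't reproduce the defendants' code, but the relaying arrangement it describes is a familiar pattern: a reverse proxy accepts a customer's request, attaches a stolen API key the customer never sees, and forwards the request to the real service. Below is a minimal, hypothetical sketch of that general pattern in Python; the endpoint URL, deployment name, and key are placeholders, and nothing here reflects the defendants' actual software or any undocumented Microsoft API.

```python
# Minimal sketch of a key-hiding relay: accept a client's chat request,
# forward it to an Azure OpenAI-style endpoint with a key the client
# never sees, and return the upstream response. All values are
# hypothetical placeholders, not details from the case.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = ("https://example-resource.openai.azure.com/openai/deployments/"
            "example-deployment/chat/completions?api-version=2024-02-01")
API_KEY = "REDACTED-PLACEHOLDER-KEY"  # in the alleged scheme, a compromised customer key

class RelayHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the client's request body and re-send it upstream,
        # swapping in the hidden API key for authentication.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        upstream_req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={"Content-Type": "application/json", "api-key": API_KEY},
            method="POST",
        )
        with urllib.request.urlopen(upstream_req) as resp:
            payload = resp.read()
        # Hand the upstream response back to the client unchanged.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), RelayHandler).serve_forever()
```

From the customer's side, such a relay looks like an ordinary API endpoint, which is what makes resold access to compromised accounts hard to spot from traffic alone.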
Microsoft didn't say how the legitimate customer accounts were compromised, but it noted that hackers have been known to create tools that search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored.

Microsoft and others forbid using their generative AI systems to create various kinds of content. Off-limits content includes material that features or promotes sexual exploitation or abuse, is erotic or pornographic, or attacks, denigrates, or excludes people based on race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits. Also banned is content containing threats, intimidation, promotion of physical harm, or other abusive behavior.

Besides expressly banning such usage of its platform, Microsoft has also developed guardrails that inspect both the prompts users enter and the resulting output for signs that the requested content violates any of these terms. These code-based restrictions have been repeatedly bypassed in recent years through hacks, some benign and performed by researchers, others by malicious threat actors.
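Microsoft hasn't published the internals of these filters, but the two-checkpoint shape described above, screening the prompt before generation and the output after generation, is straightforward to illustrate. The sketch below uses a placeholder keyword check standing in for the trained content classifiers real services use; the function names, blocklist, and stand-in model are invented for illustration.

```python
# Minimal sketch of the dual-checkpoint guardrail pattern: screen the
# user's prompt before it reaches the model, then screen the model's
# output before it reaches the user. The keyword check is a placeholder
# for a real content classifier; the policy here is invented.
BLOCKLIST = {"example-banned-term"}  # placeholder policy, not Microsoft's

def violates_policy(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def guarded_generate(prompt: str, generate) -> str:
    # Checkpoint 1: refuse disallowed prompts before generation.
    if violates_policy(prompt):
        return "Request blocked: prompt violates content policy."
    output = generate(prompt)
    # Checkpoint 2: withhold disallowed output that slipped past the prompt check.
    if violates_policy(output):
        return "Response withheld: generated content violates content policy."
    return output

if __name__ == "__main__":
    # Stand-in "model" for demonstration purposes.
    echo_model = lambda p: f"Model response to: {p}"
    print(guarded_generate("Write a haiku about autumn.", echo_model))
```

Because both checkpoints run in code around the model, anyone who can reach the model through another path, such as a relay authenticating with a stolen key, can attempt to sidestep them, which is the kind of bypass the complaint alleges.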
Microsoft didn't outline precisely how the defendants' software was allegedly designed to bypass the guardrails the company had created.

Masada wrote:

"Microsoft's AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future."

The lawsuit alleges that the defendants' service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act, and that it constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in "any activity herein."

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.