
Microsoft Is Suing People Who Did Bad Things With Its AI
futurism.com
Microsoft just amended a lawsuit to name four developers across multiple countries who allegedly bypassed safety guardrails and abused Microsoft's AI tools to generate deepfaked celebrity porn and other harmful content.

The tech giant announced the update in a blog post yesterday, saying that all four developers are members of Storm-2139, a cybercrime network. Being alleged cybercriminals, the named defendants go by nicknames that sound straight out of an early-2000s hacker flick: there's Arian Yadegarnia aka "Fiz" of Iran; Alan Krysiak aka "Drago" of the United Kingdom; Ricky Yuen aka "cg-dot" of Hong Kong; and Phát Phùng Tấn aka "Asakuri" of Vietnam.

In the post, Microsoft breaks the individuals making up Storm-2139 into three tiers: "creators, providers, and users," who together comprise a dark marketplace hinging on the jailbreaking and modification of Microsoft's AI tools to create unlawful or destructive material.

"Creators developed the illicit tools that enabled the abuse of AI-generated services," reads the post, adding that the "providers then modified and supplied these tools to end users, often with varying tiers of service and payment."

"Finally," it continues, "users then used these tools to generate violating synthetic content, often centered around celebrities and sexual imagery."

The civil suit was initially filed in December, albeit with all specific defendants listed simply as "John Doe." Now, in light of new evidence uncovered in Microsoft's investigation into Storm-2139, the company is choosing to unmask some of the alleged bad actors embroiled in the litigation, citing deterrence as its motivation for doing so. Others remain unnamed amid ongoing investigations, according to the tech giant, though it says that at least two are American.

"We are pursuing this legal action now against identified defendants," Microsoft declared in the post, "to stop their conduct, to continue to dismantle their illicit operation, and to deter others intent on weaponizing our AI technology."

It's a fascinating show of force by the behemoth that is Microsoft, which understandably doesn't want bad actors abusing its generative AI tools to create obviously terrible content, like nonconsensual fake porn of real people. After all, as far as deterrents go, finding yourself in the legal crosshairs of one of the world's wealthiest and most powerful organizations is pretty high up there.

To that end, according to Microsoft, the legal pressure has already worked to divide Storm-2139.
According to Microsoft, the "seizure" of the group's website and "subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another."

That said, as Gizmodo notes, Microsoft's decision to throw its heavy legal weight against alleged abusers of its tech also lands in a bit of a gray area in the ongoing debate over AI safety and how companies should seek to limit AI misuse.

Some companies, like Meta, have chosen to make their frontier AI models open-source, a more decentralized approach to AI development, though one that some experts argue could allow bad actors to quietly harness advanced AI technology away from public view or oversight. (The AI industry currently pretty much regulates itself, so the concept of "oversight" should generally be taken with a grain of salt, though companies like Meta, Microsoft, and Google do still have to answer to the court of public opinion.)

Microsoft, for its part, has embraced more of a mixed approach, building some models in public and keeping others closed off from public view. Regardless of the tech giant's vast resources and stated commitments to safe and responsible AI, though, criminals have still allegedly found ways to crack through its guardrails and profit from the ill use. And as Microsoft, like others, continues down its all-in-on-AI road, it can't exactly count on litigation alone to quell harmful exploitation of its AI tools, especially in such a deregulated environment, where the law itself is still catching up to the complexities of AI harm and abuse.

"While Microsoft and others have established systems designed to prevent misuse of generative AI," writes Axios' Ina Fried, "those protections only work when the technological and legal systems can effectively enforce them."

More on AI and harm: Man's Entire Life Destroyed After Downloading AI Software