- cross-posted to:
- [email protected]
This article is hilarious to me for some reason…
All 10 defendants were named John Doe because Microsoft doesn’t know their identity.
So Microsoft doesn’t know who the people are.
Microsoft didn’t say how the legitimate customer accounts were compromised but said hackers have been known to create tools to search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored.
The compromised accounts were likely stolen because their owners had committed API credentials directly into their published code.
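The kind of repo-scraping tool the article describes is easy to picture. Here's a minimal sketch, assuming a few made-up key patterns for illustration (real scanners like gitleaks or truffleHog ship far larger rule sets):

```python
import re

# Hypothetical patterns resembling common API-key formats.
# Real scanning tools use hundreds of provider-specific rules.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),    # OpenAI-style secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),       # AWS access key ID
    # generic `api_key = "..."` assignment with a long literal value
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def find_leaked_keys(source: str) -> list[str]:
    """Return every substring of `source` that matches a key pattern."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(source))
    return hits

sample = 'api_key = "abcd1234efgh5678ijkl"  # oops, pushed to a public repo'
print(find_leaked_keys(sample))
```

Point a loop like this at every file in a cloned repository and you have the "search code repositories for API keys" workflow Microsoft is describing, which is exactly why the standard advice is to keep credentials out of committed code entirely.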
Microsoft didn’t outline precisely how the defendants’ software was allegedly designed to bypass the guardrails the company had created.
Microsoft won’t explain how their system is busted.
The lawsuit alleges the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”
Whatever the hackers generated sure did piss Microsoft off.
to bypass the guardrails the company had created
What a delightful way to say that those guardrails were worth, in effect, fuck all.
It gets even better
These code-based restrictions have been repeatedly bypassed in recent years through hacks, some benign and performed by researchers and others by malicious threat actors.
Yet their public statement is
Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels.
Sounds like they preferred to keep it live and race to mitigate but the holes were still open.
But they’re really going at them, suing people they can’t identify and throwing out every violation they can hope to make stick. It’s irresponsible.
Read this article earlier; it wasn’t very clear to me what the focus of this illicit gen-AI content actually was.
Very sneaky approach I have to say.