These code-based restrictions have been repeatedly bypassed in recent years, sometimes benignly by researchers and sometimes by malicious threat actors.
Yet their public statement is:
"Microsoft's AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels."
It sounds like they preferred to keep the service live and race to mitigate while the holes were still open.
But they're really going after them, suing defendants they can't even identify and throwing every violation they can hope to make stick.
It gets even better.
It's irresponsible.