• @mindbleach
    16 months ago

    LLMs reproducing source text verbatim is a failure mode called overfitting (memorization), and it’s something those companies want to avoid regardless.
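
    To make that claim concrete (not something the comment spells out): here’s a minimal Python sketch of one crude way verbatim reproduction could be flagged, by measuring the longest run of consecutive words a model output shares with a known source passage. The function name and the 8-word cutoff are illustrative assumptions, not any real evaluation pipeline.

    ```python
    # Hypothetical sketch: flag verbatim reproduction by finding the longest
    # contiguous word sequence shared between a source passage and a model
    # output. The 8-word threshold is arbitrary and purely illustrative.

    def longest_shared_run(source: str, generated: str) -> int:
        """Length, in words, of the longest contiguous word sequence
        appearing in both texts (classic longest-common-substring DP
        over word tokens)."""
        src = source.split()
        gen = generated.split()
        best = 0
        prev = [0] * (len(gen) + 1)
        for i in range(1, len(src) + 1):
            curr = [0] * (len(gen) + 1)
            for j in range(1, len(gen) + 1):
                if src[i - 1] == gen[j - 1]:
                    curr[j] = prev[j - 1] + 1
                    best = max(best, curr[j])
            prev = curr
        return best

    if __name__ == "__main__":
        source = "It was the best of times, it was the worst of times"
        output = "The model wrote: it was the best of times, it was something else"
        run = longest_shared_run(source.lower(), output.lower())
        # Long verbatim runs suggest memorization rather than coincidence.
        print(f"longest shared run: {run} words",
              "-> looks memorized" if run >= 8 else "-> probably fine")
    ```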

    Image AI knowing what popular characters look like is a non-issue.

    What are we even talking about here? These aren’t exact copies of existing images. You expect the draw-anything robot to be incapable of drawing Superman? Or a cartoon mouse? Even when you explicitly describe the color of a costume and the letter that goes on a hat?

    Some guy on Twitter asks, ‘how can liability be pushed to the user?’ as if liability exists when you just draw a thing. If someone goes straight from Bing image generator (or Google image search) to the logo for their new company, yeah, no shit that’s a legal issue, and that person is being an idiot. But you can draw SpongeBob pornography. Like, right now. Pick up a pen and go. You can even take suggestions, or ask someone else to do it for you. It’s not about to confuse customers or even involve customers. That act alone is not what copyright and trademark laws cover. And if they did, the moral answer would be to walk them way the hell back.

    “Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials”

    No shit. But so were you.

    “Generative AI systems are fully capable of producing materials that infringe on copyright”

    Literally no one has ever promised otherwise. ChatGPT’s earliest examples name-dropped Tolkien characters.

    The slapdash filters are just another extra-legal effort to fend off copyright cartels’ flesh-eating lawyers. Those bastards get mad that paper can contain scribbles they maybe sorta kinda theoretically might make one penny from, and one more company goes ‘fine, here, fuck off.’