A group of hackers that says it believes “AI-generated artwork is detrimental to the creative industry and should be discouraged” is hacking people who use a popular interface for the AI image generation software Stable Diffusion, via a malicious extension for that interface shared on GitHub.

ComfyUI is an extremely popular graphical user interface for Stable Diffusion, shared freely on GitHub, that makes it easier for users to generate images and modify their image generation models. ComfyUI_LLMVISION, the compromised extension used to hack users, let users integrate the large language models GPT-4 and Claude 3 into the same interface.

The ComfyUI_LLMVISION GitHub page is currently down, but a Wayback Machine archive of it from June 9 states that it was “COMPROMISED BY NULLBULGE GROUP.”

  • A_Very_Big_Fan@lemmy.world · 6 months ago

    Honestly I still don’t understand the “stealing” argument. Does the stealing occur during training? From everything I’ve learned about the technology, the training, in terms of the data given and the end result, isn’t any different from me scrolling through Google Images to get a concept of how to draw something. It’s not like they have a copy of the whole Internet on their servers to make it work.

    Does it occur during the image generation? Because try as I might, I’ve never been able to get it to output copyrighted material. I know overfitting used to be an issue, but we figured out how to solve that a long time ago. “But the signatures!!” Yeah, it’s never output a recognizable/legible signature; it just associates signatures with art.

    Shouldn’t art theft be judged like any other copyright matter? It doesn’t matter how it was created, it matters whether it violates fair use. I really don’t think training crosses that line, and I’ve yet to see these models output a copy of another image outside of image-to-image models.