• bassomitron@lemmy.world · 1 year ago

    On one hand I get their point, but on the other hand I view it as kind of pointless. Like if I go on DeviantArt forums or some other publicly viewable art repository, look at everyone’s art content, and then decide to emulate their styles, that’s perfectly legal and I’m not exactly breaking any ethical codes by doing so. So how is it really that much different for an AI to do the same?

    Regardless, I won’t be surprised when developers code in a countermeasure to defend their models against this AI malware and yet another digital arms race will be born.
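
    For illustration, one plausible shape such a countermeasure could take (this is a guess, not anything the article describes; embed_image and embed_text below are hypothetical stand-ins for a real multimodal encoder): drop training pairs whose image and caption embeddings disagree, since a poisoned image no longer matches its caption.

    ```python
    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def filter_suspect_pairs(pairs, embed_image, embed_text, threshold=0.2):
        # embed_image / embed_text are hypothetical encoders that map an
        # image or a caption into the same vector space. A poisoned pair,
        # whose pixels have drifted away from what its caption says,
        # would tend to score low and get dropped before training.
        return [
            (image, caption)
            for image, caption in pairs
            if cosine(embed_image(image), embed_text(caption)) >= threshold
        ]
    ```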

    • fishos@lemmy.world · 1 year ago

      Exactly this. If a person can learn from others' art to improve their own art, then why can't a person also use others' art to improve their code? Plus, I thought art was about sharing and expressing emotions, thoughts, and ideas. Its whole point is to be experienced and then have that experience influence other thoughts and ideas within you.

      I get artists deserve to be paid, but this just feels too capitalist and greedy. Though on the other hand, a lot of these AI models are paid subscriptions, so I can see the argument that their art is unfairly benefiting some capitalist on top. I'm against the concept of restricting art usage, but I guess I get why they're asking in this case.

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    The goal is to help visual artists and publishers protect their work from being used to train generative AI image synthesis models, such as Midjourney, DALL-E 3, and Stable Diffusion.

    The open source “poison pill” tool (as the University of Chicago’s press department calls it) alters images in ways invisible to the human eye that can corrupt an AI model’s training process.
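
    The summary doesn't spell out how the alteration works, but the defining constraint is that the per-pixel change stays too small to see. A toy sketch of just that constraint, using random noise bounded in L-infinity norm (the real tool optimizes its perturbation against a model's feature extractor; random noise alone would not poison anything):

    ```python
    import numpy as np

    def perturb(pixels: np.ndarray, epsilon: float = 4.0,
                seed: int = 0) -> np.ndarray:
        # Add noise bounded to +/- epsilon intensity levels per channel,
        # small enough to be imperceptible at normal viewing sizes.
        rng = np.random.default_rng(seed)
        noise = rng.uniform(-epsilon, epsilon, size=pixels.shape)
        shifted = np.clip(pixels.astype(np.float64) + noise, 0, 255)
        return shifted.astype(np.uint8)

    image = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in image
    poisoned = perturb(image)
    # No pixel moved by more than epsilon intensity levels.
    assert np.abs(poisoned.astype(int) - image.astype(int)).max() <= 4
    ```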

    Those with access to existing large image databases (such as Getty and Shutterstock) are at an advantage when using licensed training data.

    But as the Nightshade team sees it, research use and commercial use are two entirely different things, and they hope their technology can force AI training companies to license image data sets, respect crawler restrictions, and conform to opt-out requests.
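
    "Respect crawler restrictions" concretely means honoring a site's robots.txt before scraping. A minimal sketch using Python's standard library (the URL and user-agent here are placeholders, not anything from the article):

    ```python
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://example.com/robots.txt")
    parser.read()  # fetch and parse the site's crawl rules

    # A well-behaved image crawler checks each URL before downloading it.
    if parser.can_fetch("ExampleImageBot", "https://example.com/gallery/art.png"):
        print("robots.txt permits fetching this URL")
    else:
        print("robots.txt disallows fetching this URL")
    ```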

    “The point of this tool is to balance the playing field between model trainers and content creators,” co-author and University of Chicago professor Ben Y. Zhao said in a statement.

    Shawn Shan, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Zhao developed Nightshade as part of the Department of Computer Science at the University of Chicago.


    The original article contains 656 words, the summary contains 179 words. Saved 73%. I’m a bot and I’m open source!

  • Fleur__@lemmy.world · 1 year ago

    I feel like this AI art thing is gonna be the equivalent of self-checkouts taking people's jobs in 5-10 years.

    How artists get compensated is a strange issue, because shouldn't people who choose to make art not have to worry about whether their art can pay their bills?

    The issue is capitalism God damnit!