I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
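The size argument can be made concrete with rough back-of-the-envelope arithmetic. All of the numbers below are illustrative assumptions (a hypothetical 7B-parameter model and a 2-trillion-token corpus), not figures from any particular model:

```python
# Rough arithmetic behind the "model is far smaller than its training data"
# argument. Every number here is an illustrative assumption.

params = 7e9             # hypothetical model: 7 billion parameters
bytes_per_param = 2      # 16-bit weights
model_bytes = params * bytes_per_param   # 14e9 bytes, i.e. ~14 GB of weights

train_tokens = 2e12      # hypothetical training corpus: 2 trillion tokens

# Model capacity available per training token, in bits:
bits_per_token = model_bytes * 8 / train_tokens

print(f"model size: {model_bytes / 1e9:.0f} GB")
print(f"capacity:   {bits_per_token:.3f} bits per training token")
# → model size: 14 GB
# → capacity:   0.056 bits per training token
```

With well under a tenth of a bit of weight capacity per token seen in training, the model cannot be storing verbatim copies of most of its inputs; memorization of specific works is the exception (typically from heavily duplicated data), not the rule.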

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect would be private. I understand hate for purposely over-fitting the model on data to reproduce people’s “likeness.” I understand the hate for AI generated shit (because it is shit). I really don’t understand where all this hate for using public data for building a “statistical” model to “learn” general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.

  • Xeroxchasechase@lemmy.world · 4 months ago (edited)

    As long as it’s licensed under Creative Commons of some sort. Copyrighted materials are copyrighted and shouldn’t be used without consent; this protects individuals too, not only corporations. (Excuse my English)

    Edit: Your argument about probability and parameter size is inapplicable, in my mind. The same could be said about JPEG lossy compression.

    • Zagorath@aussie.zone · 4 months ago

      Creative Commons would not actually help here. Even the most permissive licence, CC-BY, requires attribution. If using material for training requires a copyright licence (which is certainly not a settled question of law), CC would likely be treated just the same as all rights reserved.

      (There’s also CC-0, but that’s basically public domain, or as near to it as an artist is legally allowed to do in their locale. So it’s basically not a Creative Commons licence.)

    • wildncrazyguy138@fedia.io · 4 months ago

      Could the copyrighted material consumed potentially fall under fair use? There are provisions for research purposes.

    • 31337 (OP) · 4 months ago

      Incidentally, I read this a while ago, because I was training a classifier on mostly Creative Commons licensed works: https://creativecommons.org/2023/08/18/understanding-cc-licenses-and-generative-ai/

      … we believe there are strong arguments that, in most cases, using copyrighted works to train generative AI models would be fair use in the United States, and such training can be protected by the text and data mining exception in the EU. However, whether these limitations apply may depend on the particular use case.

      • Xeroxchasechase@lemmy.world · 4 months ago (edited)

        Maybe there should be a distinction between an individual doing it for education and research and a corporation doing it for commercial use. As a user it’s fun and useful to generate whatever mix of text or images I want from a model that was trained on everything, but a user doesn’t see the exploitation by the corporation that handed them the tool.