• ∟⊔⊤∦∣≶@lemmy.nz · 1 year ago

    “Model” is the right term here, not “database.”

    We learned something about how LLMs work from this… it’s like a bunch of paintings were chopped up into pixels and used to make other paintings. No one knew it was possible to break the model and have it spit out the pixels of a single painting, in order.

    I wonder if diffusion models have some other weird quirks we have yet to discover.

    • Jamie@jamie.moe · 1 year ago

      I’m not an expert, but I would say a diffusion model is much less likely to spit out training data completely intact. LLMs and diffusion models work in very different ways.

      LLMs work by predicting the next statistically likely token: they take all of the previous text, then predict what the next token will be based on that. So, if you can trick the model into a state where the subsequent tokens are something verbatim from the training data, then that’s what you get.
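      As a toy sketch of that next-token loop (a bigram lookup table standing in for a real LLM, purely illustrative): once the predictor’s “most likely next token” chain lands on a memorized sequence, greedy decoding reproduces it word for word.

```python
# Toy next-token predictor: a bigram table plays the role of the LLM.
# The training sentence is fully memorized, so greedy decoding
# reproduces it verbatim once we start anywhere inside it.
training_text = "the quick brown fox jumps over the lazy dog".split()
bigram = {prev: nxt for prev, nxt in zip(training_text, training_text[1:])}

def generate(start_token, max_tokens=10):
    out = [start_token]
    for _ in range(max_tokens):
        nxt = bigram.get(out[-1])   # pick the "most likely" next token
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("quick"))  # -> quick brown fox jumps over the lazy dog
```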

      Diffusion models work by taking a randomly generated latent, combining it with the CLIP encoding of the user’s prompt, and then trying to turn that random noise into a new latent, which the VAE then decodes into something a human can see, because the latents the model deals with are meaningless numbers to humans.
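      A rough structural sketch of that pipeline (the function names are placeholders, not a real library’s API; the point is only the flow of noise, conditioned denoising, then VAE decode):

```python
import numpy as np

# Placeholder stand-ins for the real components (text encoder, U-Net, VAE).
def clip_encode(prompt): ...                 # prompt -> text embedding
def denoise_step(latent, text_emb, t): ...   # one reverse-diffusion step
def vae_decode(latent): ...                  # latent -> image a human can see

def text_to_image(prompt, steps=50, latent_shape=(4, 64, 64), seed=None):
    rng = np.random.default_rng(seed)
    latent = rng.standard_normal(latent_shape)   # start from pure random noise
    text_emb = clip_encode(prompt)               # condition on the user's prompt
    for t in reversed(range(steps)):             # gradually remove the noise
        latent = denoise_step(latent, text_emb, t)
    return vae_decode(latent)                    # decode the latent into pixels
```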

      In other words, there’s a lot more randomness to deal with in a diffusion model. You could probably get a specific source image back if you specially crafted a latent and a prompt, which one guy did do by basically running img2img on a specific image that was in the training set and giving it a prompt to spit the same image out again. But that required having the original image in the first place, so it’s not really a weakness in the same way this was for GPT.
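      For reference, the img2img experiment described above looks roughly like this with the diffusers library (the model ID, file names and strength value are illustrative):

```python
# Sketch of the img2img setup: start from the original image rather than
# pure noise, so the output stays close to it. Values are illustrative.
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
init_image = Image.open("original_training_image.png").convert("RGB")

result = pipe(
    prompt="a caption describing the original image",
    image=init_image,
    strength=0.3,   # low strength keeps the result near the input image
).images[0]
result.save("reconstruction.png")
```

      The key point from the comment holds in the sketch too: you have to hand the pipeline the original image to get it back out.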

      • TWeaK@lemm.ee · 1 year ago

        But the fact is the LLM was able to spit out its training data. This means that anything in the training data isn’t just copied into the training dataset, allegedly under fair use as research, but is also copied into the LLM as part of an active commercial product. Sure, the LLM might break it down and store the components separately, but if it can reassemble them and spit out the original copyrighted work, then how is that different from a photocopier breaking down the image scanned from a piece of paper and then reassembling it into instructions for its printer?

        • lad@programming.dev · 1 year ago

          It’s not copied as-is; the thing is a bit more complicated, as was already pointed out.

          • TWeaK@lemm.ee · 1 year ago

            But the thing is the law has already established this with people and their memories. You might genuinely not realise you’re plagiarising, but what matters is the similarity of the work produced.

            ChatGPT has copied the data into its training database, then trained off that database, and then it runs “independently” of that database, which is how they vaguely argue fair use under the research exemption.

            However, if ChatGPT can “remember” its training data and recompile significant portions of it in certain circumstances, then it must be guilty of plagiarism and copyright infringement.

          • Jamie@jamie.moe · edited · 1 year ago

            Speaking for LLMs, given that they operate on a next-token basis, there will always be some statistical likelihood of spitting out original training data, and that can’t be avoided. The usual counter-argument is that, in theory, the odds of a particular piece of training data coming back out intact for more than a handful of words should be extremely low.

            Of course, in this case, Google’s researchers took advantage of the repetition-discouragement mechanism to make that unlikely event occur reliably, showing that there are indeed flaws that let it happen.
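            For illustration, a repetition penalty is commonly implemented roughly like this (a generic sketch in the style of open-source samplers, not necessarily how ChatGPT does it): tokens that have already appeared get their logits pushed down before sampling.

```python
import numpy as np

# Generic repetition-penalty sketch (CTRL-style, as used in many open-source
# samplers): logits of tokens already generated are made less attractive.
def apply_repetition_penalty(logits, previous_tokens, penalty=1.2):
    logits = logits.copy()
    for tok in set(previous_tokens):
        if logits[tok] > 0:
            logits[tok] /= penalty   # shrink positive logits of repeated tokens
        else:
            logits[tok] *= penalty   # push negative logits further down
    return logits

# Example: token 2 was already emitted, so its logit drops before sampling.
logits = np.array([1.0, 0.5, 3.0, -0.2])
print(apply_repetition_penalty(logits, previous_tokens=[2]))  # [ 1.   0.5  2.5 -0.2]
```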

            • TWeaK@lemm.ee · 1 year ago

              If a person studies a text and then writes an article on the same subject using the same wording and discussing the same points, it’s plagiarism whether or not they made an exact copy. Surely it should also be the case with LLMs, which train on the data and then inadvertently replicate it? The law has already established that it doesn’t matter what the process is for making the new work; what matters is how close it is to the original work.

    • SkySyrup · edited · 1 year ago

      The compression technology a diffusion model would have to achieve to realistically (i.e., not too lossily) store “the training data” would be more valuable than the entirety of the machine learning field right now.

      They do not “compress” images.
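      Some rough back-of-envelope arithmetic makes the point (the figures are assumptions, in the ballpark of a Stable-Diffusion-class checkpoint and a LAION-scale training set):

```python
# Back-of-envelope: bytes of model weights available per training image.
# Both figures are rough assumptions, not exact numbers.
model_bytes = 2 * 1024**3          # ~2 GB checkpoint of fp16 weights
training_images = 2_000_000_000    # ~2 billion training images

bytes_per_image = model_bytes / training_images
print(f"~{bytes_per_image:.1f} bytes of weights per training image")  # ~1.1 bytes
```

      At roughly one byte of weights per training image under these assumptions, there simply isn’t room for the model to store the images themselves, lossily or otherwise.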