I don’t make art myself; the closest I come is software development, which is already heavily scraped and used for training AI models. So I agree that I might not fully understand, especially since my field tends to embrace assistive tools.
That said, I think the idea that AI-generated art reflects poorly on the original artist is a bit of a misconception, and partly self-inflicted. When someone looks at an AI-generated piece, they’re not going to think, “Oh, that was by Liyunxiao,” because the end product isn’t a direct copy of any specific work. The models don’t store or reproduce the original source data; they learn patterns from the source material and then reapply what they’ve learned, often with a lot of randomization (as shown by their sometimes blatant inability to produce realistic-looking outputs).
While I believe we agree that artists should give permission before their work is used in a training model, or at the very least be paid for its usage instead of having it scraped, I think the two are comparable. One makes a new piece of art using traits it has “learned” from the training set; the other copies an existing piece of art. Neither prevents anyone from using the original source (artist or game studio), and both are usually done against the wishes of the original team.
That being said, I think the example provided works better when compared to piracy, since at that point it’s at least a 1:1 clone rather than a new creative work. An art piece by a Holocaust survivor thrown into the training set of a diffusion model wouldn’t come out the same image on the other end; only a generalization and style set is retained. At the end of the day, nobody can know where a diffusion model’s original sources came from, nor can it produce a picture recognizable as a specific artist’s style, whereas with piracy you have a piece of work you can look up to see who owned it.
That’s just my opinion on it all though.