There are photorealistic images of Pokémon. If you think the model could not generate those without access to real photos of that exact thing, I invite you to show me those photos.
And y’all keep fixating on these mere hundreds of images contaminating a database of six billion - images that weren’t even labeled properly, or else they would have been trivially excluded. Labels are why text-to-image generation works in the first place. Models trained without such images will still be able to generate this content: if they have photos of children and photos of nudity, they can combine those as readily as Shrek plus Darth Vader plus a unicycle. The model doesn’t need an exact matching input image of an event featuring all three things. And even if it had that exact image, if it was labeled “goblin fights samurai on tricycle,” the prompt would never reference it.
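If it helps to make the composition point concrete, here’s a minimal sketch, assuming the Hugging Face diffusers library and a public Stable Diffusion checkpoint (the checkpoint ID, prompt, and settings are purely illustrative, not anyone’s actual setup):

    # Minimal sketch: text-conditioned generation composing concepts that
    # almost certainly never appear together in any single training image.
    # Assumes the Hugging Face `diffusers` library and a public Stable
    # Diffusion checkpoint; the model ID, prompt, and settings are illustrative.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # The prompt drives the composition; there is no lookup of an exact
    # matching training photo of this scene.
    prompt = "Shrek and Darth Vader riding a unicycle, photorealistic"
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("composite.png")

The only point of the sketch is that the text prompt is what stitches separately learned concepts together, not retrieval of a matching photo from the training set.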
Stop denying the truth
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
Those aren’t the same thing, god dammit.