Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.

But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement.

  • @Ookami38 · 7 months ago

    Same thing I’ve said all along: shit’s fucked, but it’s the people, not the tool, that’s the problem. Turns out it’s the people training the AI with shit like this that are the problem, not the AI itself.

    • @[email protected]
      link
      fedilink
      -27 months ago

      If people are using it for these purposes, then they shouldn’t be allowed to use it.

      • AnonTwo · 7 months ago

        Can’t you technically already do this with more primitive tools? Like… just draw it? Or use Photoshop filters?

        This just greatly expands the number of people capable of doing it.

        Plus, I’m pretty sure the laws already in place would get someone prosecuted for creating that material, and said person would probably use the tools even if they had to resort to illegal means.

        Basically, I’m not sure this line of reasoning does anything but hurt benign or even legitimate uses of AI.

        • @[email protected]
          link
          fedilink
          -17 months ago

          This is happening in part because the creators of these AI systems don’t verify their training data.

          It’s inexcusable to include this content in training data and then blame bad actors.

          Creating this material by other methods is also illegal in many countries, as is distributing it. Just because an AI does it doesn’t make it justified.

          Any person or business creating or using such material shouldn’t be allowed unsupervised access to distribution channels; that’s already the case for older methods. AI shouldn’t be a scapegoat. It just provides plausible deniability.

          Plausible deniability shouldn’t be an excuse, especially when businesses are doing this. They should be responsible for the content they feed into AI training. It’s completely inexcusable. Only dumb tech bros who don’t understand tech, and pedos, could seriously think this should be allowed.

          • AnonTwo · 7 months ago

            What exactly are you basing the “dumb tech bros” thing on? Is there even a single training set that has some sort of verification yet? If there were, we wouldn’t have all the DMCA issues that AI is also going through, would we? It seems to be generally argued that this isn’t actually easy to do at the moment.

            Like, you’re arguing a lot of absolutes here that don’t seem to be backed up by anything???

      • @Ookami38 · 7 months ago

        I mean, I don’t think I disagree with that, necessarily. That’s been my stance the whole time: blame the user, not the tool.