A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and growing ubiquity of generative AI being used for nefarious purposes.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, producing dramatic footage of law enforcement leading McCorkle, still in his work uniform, out of the theater in handcuffs.

  • @[email protected]
    link
    fedilink
    -3823 days ago

    How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn’t negate the exploitation of real children all the way down.

    So no, you are making false equivalence with your video game metaphors.

    • @fernlike3923 · 59 points · 23 days ago

      A generative AI model doesn’t require the exact thing it creates in its datasets. It most likely just combined regular nudity with a picture of a child.

      • finley · -13 points · 23 days ago

        In that case, the images of children were still used without their permission to create the child porn in question.

        • @[email protected]
          link
          fedilink
          2923 days ago

          That’s not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.

          Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.

          • finley · -7 points · 22 days ago

            “Nuh uh!” is a pretty weak argument.

        • @fernlike3923 · 5 points · 23 days ago

          That’s a whole other thing than the AI model being trained on CSAM. I’m currently neutral on this topic, so I’d recommend replying to the main thread.

            • @fernlike3923 · 14 points · 23 days ago (edited)

              It’s not CSAM in the training dataset; it’s just pictures of children/people that are already publicly available. That falls under the copyright side of AI rather than illegal training material.

              • finley · -25 points · 23 days ago (edited)

                It’s images of children used to make CSAM. No amount of mental gymnastics can change that, nor the fact that those children’s consent was not obtained.

                Why are you trying so hard to rationalize the creation of CSAM? Do you actually believe there is a context in which CSAM is OK? Are you that sick and perverted?

                Because it really sounds like that’s what you’re trying to say, using copyright law as an excuse.

        • @[email protected]
          link
          fedilink
          English
          223 days ago

          Good luck convincing the AI advocates of this. They have already decided that all imagery everywhere is theirs to use however they like.

    • macniel · 27 points · 23 days ago

      Can you or anyone verify that the model was trained on CSAM?

      Besides, an LLM doesn’t need explicit content to derive from in order to create a naked child.

      • @[email protected]
        link
        fedilink
        -2423 days ago

        You’re defending the generation of CSAM pretty hard here, with a vague “but no child we know of was involved” as a defense.

        • macniel · 13 points · 23 days ago

          I just hope that the models aren’t trained on CSAM. That would make generating stuff they can fap to “ethically reasonable,” as no children would be involved. And I hope that those who have those tendencies can be helped one way or another that doesn’t involve chemical castration or incarceration.

    • Diplomjodler · 12 points · 23 days ago

      While I wouldn’t put it past Meta & Co. to explicitly seek out CSAM to train their models on, I don’t think that’s how this stuff works.

    • @[email protected]
      link
      fedilink
      English
      -523 days ago

      But the AI companies insist the outputs of these models aren’t derivative works in any other circumstances!