Late last year, California passed a law against the possession or distribution of child sexual abuse material (CSAM) that has been generated by AI. The law went into effect on January 1, and Sacramento police announced yesterday that they have already arrested their first suspect: a 49-year-old Pulitzer Prize-winning cartoonist named Darrin Bell.

The new law, which you can read here, declares that AI-generated CSAM is harmful, even without an actual victim. In part, says the law, this is because all kinds of CSAM can be used to groom children into thinking sexual activity with adults is normal. But the law singles out AI-generated CSAM for special criticism due to the way that generative AI systems work.

"The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims," it says, “revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity.”

Edit: Bolded certain parts to clarify why they’re doing it.

I’m locking this thread because I won’t have time to watch it.

  • Grimy@lemmy.world · 20 hours ago

    The ICAC (Internet Crimes Against Children) Task Force recently opened an investigation into 18 CSAM files being shared online. Further investigation revealed that the sharer was actually offering 134 CSAM videos, and police claim they were able to trace those files to the account of local resident and well-known cartoonist Darrin Bell.

    It’s a messy subject, but any kind of sharing/selling quickly evaporates any reservations I have about prosecuting. It’s still child porn, and even without any “real” victims, trying to build a community around it is gross.

    The part about using the images to groom children is also an angle I had never really thought about.

  • earphone843 · 21 hours ago

    While the content is abhorrent, I don’t get the logic of its being trained on already-existing images causing harm to the individuals in the images. It’s not like CSAM was generated for the express purpose of training the AI model.

    • pelespirit (OP, mod) · 21 hours ago

      The new law, which you can read here, declares that AI-generated CSAM is harmful, even without an actual victim. In part, says the law, this is because all kinds of CSAM can be used to groom children into thinking sexual activity with adults is normal. But the law singles out AI-generated CSAM for special criticism due to the way that generative AI systems work.

      “The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims,” it says, “revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity.”

      • earphone843 · 21 hours ago

        Yes, you copied and pasted the section of the article I disagree with. Did you have a point?

        • pelespirit (OP, mod) · 20 hours ago

          Don’t forget the bolded section, which answers your questions, because it is being trained on specific child porn. You’re dangerously close to sticking up for child porn. I’m also the mod, so tread lightly on that issue.

          • Grimy@lemmy.world · 20 hours ago (edited)

            > I don’t get the logic of its being trained on already-existing images causing harm to the individuals in the images.

            You aren’t answering his question above and completely missing his point.

            > I’m also the mod, so tread lightly on that issue.

            And then threatening him.

            He isn’t asking if there is CSAM in the dataset but why it would matter. Granted, there is a lot to be said on the subject, but you aren’t saying much other than power-tripping-bastard behavior.

            • pelespirit (OP, mod) · 20 hours ago

              I did answer with the bolded text. Move along.

          • earphone843 · 20 hours ago

            The bolded section doesn’t answer my questions, because my questions disagree with its assertions.

            • pelespirit (OP, mod) · 20 hours ago

              Then let’s end this discussion here; I disagree.

    • _core · 21 hours ago

      If AI-generated CSAM stops actual CSAM, isn’t that a good thing?