Late last year, California passed a law against the possession or distribution of child sex abuse material (CSAM) that has been generated by AI. The law went into effect on January 1, and Sacramento police announced yesterday that they have already arrested their first suspect: a 49-year-old Pulitzer Prize-winning cartoonist named Darrin Bell.

The new law, which you can read here, declares that AI-generated CSAM is harmful, even without an actual victim. In part, says the law, this is because all kinds of CSAM can be used to groom children into thinking sexual activity with adults is normal. But the law singles out AI-generated CSAM for special criticism due to the way that generative AI systems work.

"The creation of CSAM using AI is inherently harmful to children because the machine-learning models utilized by AI have been trained on datasets containing thousands of depictions of known CSAM victims," it says, “revictimizing these real children by using their likeness to generate AI CSAM images into perpetuity.”

Edit: Bolded certain parts to clarify why they’re doing it.

I’m locking this thread because I won’t have time to watch it.

  • Grimy@lemmy.world · 20 hours ago

    > I don’t get the logic of it being trained on already existing images causing harm to the individuals in the images.

    You aren’t answering his question above and are completely missing his point.

    > I’m also the mod, so tread lightly on that issue.

    And then threatening him.

    He isn’t asking if there is CSAM in the dataset but why it would matter. Granted, there is a lot to be said on the subject, but you aren’t saying much other than showing power-tripping-bastard behavior.

    • pelespirit (OP, mod) · 20 hours ago

      I did answer with the bolded text. Move along.