• UnpluggedFridge@lemmy.world · 6 months ago

    These cases are interesting tests of our first amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.

    Currently, we do not outlaw written depictions or drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions, but also because we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. But in the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.

    So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?

    We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.

    A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.

    • Corkyskog · 6 months ago

      It comes back to distribution for me. If they’re generating the stuff for themselves, gross, but I don’t see how it can really be illegal. But if they’re distributing it, how do we know it’s not real? The amount of investigative resources that would need to be dumped into that, and the impact on those investigators’ mental health… I don’t know. I really don’t have an answer; I don’t know how they’d make it illegal, but it really feels like distribution should be.

    • TheHarpyEagle@lemmy.world · 6 months ago

      It feels incredibly gross to just say “generated CSAM is a-ok, grab your hog and go nuts”, but I can’t really say that it should be illegal if no child was harmed in the training of the model. The idea that it could be a gateway to real abuse comes to mind, but that’s a slippery slope that leads to “video games cause school shootings” type of logic.

      I don’t know, it’s a very tough thing to untangle. I guess I’d just want to know if someone was doing that so I could stay far, far away from them.

    • yamanii@lemmy.world · 6 months ago

      “partly because they are obvious fictions”

      That’s it, actually. All the sites that allow it (danbooru, gelbooru, pixiv, etc.) have a clause against photorealistic content, and they will remove it.

    • nucleative@lemmy.world · 6 months ago

      Well thought-out and articulated opinion, thanks for sharing.

      If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.

      When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. To be clear, I don’t mean models trained on CSAM either; I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.

      Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.

      To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself illegal, it carries societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.

    • mindbleach · 6 months ago

      For anyone worried AI requires real photos of whatever it can render: there’s photorealistic furry porn. Show me those photos.

      • explodicle · 6 months ago

        I refuse to Google this, but aren’t there photos of furries doing it with their costumes on?

        • mindbleach · 6 months ago

          I didn’t say photorealistic people-in-fursuits porn.

      • KillingTimeItself@lemmy.dbzer0.com · 6 months ago

        For some reason, the US seems to hold a weird position on this one. I don’t really understand it.

        It’s written to be illegal, but if you look at prosecutions, I think there have been only a handful of charged cases, and the prominent ones also involved relevant prior offenses, or worse.

        It’s also interesting when you consider that there are almost certainly large image boards hosted in the US that host what could be construed as “cartoon CSAM”, notably e621. I’d have to verify their hosting location, but I believe they’re in the US, and as far as I know they’ve never had any legal issues with it. I’m sure there are other good examples as well.

        I suppose you could argue they’re exempt under the publisher rules, but these sites generally don’t moderate against these images, and I feel like this would be the rare exception where that wouldn’t be applicable.

        The law is fucking weird, dude. There is a massive disconnect between what we should be seeing and what we are seeing. I assume that’s because the authorities who police this stuff almost exclusively go after real CSAM, on account of it being an actual offense, as opposed to drawn CSAM, which is a proxy offense.

        • PirateJesus@lemmy.today · 6 months ago

          It seems to me to be a lesser charge: a net that catches a larger population, which prosecutors can then fish for bigger cases to make themselves look good. Or, as I’ve heard from others, it’s used to simplify prosecution. PedoAnon can’t argue “it’s a deepfake, not a real kid” to the SWAT team.

          “There is a massive disconnect between what we should be seeing, and what we are seeing. I assume because the authorities who moderate this shit almost exclusively go after real CSAM, on account of it actually being a literal offense, as opposed to drawn CSAM, being a proxy offense.”

          This can be attributed to a lack of proper funding for CSAM enforcement. Pedos get picked up if they become an active embarrassment, like the guy in the article. Otherwise all the money is just spent on making the database bigger and keeping the lights on. Which works for Congress: a public pedo gets nailed to the wall because of the database, the spooky spectre of the pedo out for your kids remains, vote for me, please…

          • KillingTimeItself@lemmy.dbzer0.com · 6 months ago

            “It seems to me to be a lesser charge. A net that catches a larger population and they can then go fishing for bigger fish to make the prosecutor look good. Or as I’ve heard from others, it is used to simplify prosecution. PedoAnon can’t argue ‘it’s a deepfake, not a real kid’ to the SWAT team.”

            Ah, that could be a possibility as well: ensuring reasonable flexibility in prosecution so you can be sure of what you get.