• Greg Clarke@lemmy.ca · 6 months ago · +34/-3

    This is tough. The goal should be to reduce child abuse, and it's unknown whether AI-generated CP will increase or reduce it. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don't abuse children. Like everything else with AI, we won't know the real impact for many years.

      • Dkarma@lemmy.world · 6 months ago · +47/-4

        Lol you don’t need to train it ON CSAM to generate CSAM. Get a clue.

          • mindbleach · 6 months ago · +1

            > Generative Machine Learning models have been well documented as being able to produce explicit adult content, including child sexual abuse material (CSAM)

            You can’t generate CSAM because there’s no C to A.

        • LadyAutumn@lemmy.blahaj.zone · 6 months ago · +3/-15

          It should be illegal either way, to be clear. But you think they're not training models on CSAM? You're trusting in the morality/ethics of people creating AI-generated child pornography?

          • Greg Clarke@lemmy.ca · 6 months ago · +7/-1

            The use of CSAM in training generative AI models is an issue no matter how these models are being used.

            • L_Acacia@lemmy.one · 6 months ago · +6/-1

              The training doesn't use CSAM; there's a 0% chance big tech would use that in their datasets. The models are somewhat able to link concepts like "red" and "car" even if they have never seen a red car before.
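
              To make the red-car point concrete, here's a minimal sketch using the Hugging Face diffusers library (the checkpoint name, prompt, and output filename are illustrative assumptions, and a CUDA GPU is assumed):

              ```python
              # Text-to-image sketch: the model composes the concepts "red" and
              # "car" at inference time; no particular "red car" photo has to
              # exist in the training data for this prompt to work.
              import torch
              from diffusers import StableDiffusionPipeline

              pipe = StableDiffusionPipeline.from_pretrained(
                  "runwayml/stable-diffusion-v1-5",  # public checkpoint (assumption)
                  torch_dtype=torch.float16,
              ).to("cuda")

              image = pipe("a photo of a red car").images[0]
              image.save("red_car.png")  # hypothetical output filename
              ```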

              • AdrianTheFrog@lemmy.world · 6 months ago · +3

                Well, with models like SD at least, the datasets are large enough and the employees few enough that it's impossible to have a human review every image. They scrape images from the web and try to filter them with AI, but there's still a chance of bad images getting through. This is why most companies put filters on the model's output as well as in the training process.
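
                As a rough idea of what that automated filtering can look like, here's a hedged sketch of a zero-shot CLIP filter over scraped images (the labels, threshold, and function name are made-up assumptions, not any company's actual pipeline):

                ```python
                # Zero-shot safety-filter sketch: score each scraped image
                # against two text labels with CLIP and keep it only if the
                # model is confident it matches the safe label.
                from PIL import Image
                from transformers import CLIPModel, CLIPProcessor

                model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
                processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

                LABELS = ["a safe, everyday photograph", "explicit adult content"]

                def keep_image(path: str, threshold: float = 0.9) -> bool:
                    image = Image.open(path).convert("RGB")
                    inputs = processor(text=LABELS, images=image,
                                       return_tensors="pt", padding=True)
                    probs = model(**inputs).logits_per_image.softmax(dim=-1)
                    return probs[0, 0].item() >= threshold
                ```

                Real pipelines are more involved (LAION shipped CLIP-based NSFW scores with its datasets, and diffusers runs a safety checker on generated outputs), but the shape is the same: classify, threshold, drop.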

                • DarkThoughts@fedia.io · 6 months ago · +6

                  You make it sound like it's so easy to even find such content on the web. The point is, they don't need to be trained on such material. They're trained on regular kids, so they know their sizes, faces, etc. They're trained on nude bodies, so they also know what hairless genitals or flat chests look like. You don't need to specifically train a model on nude children to generate nude children.

          • mindbleach · 6 months ago · +2

            We’re trusting that billion-dollar corporate efforts don’t possess and label hyper-illegal images, specifically so people can make more of them. Because why the fuck would they.

            • Bridger · 6 months ago · +1/-2

              If there were more money to be made than the cost of defending it, they most definitely would.

              • mindbleach · 6 months ago · +3

                ‘Google would love to be in the child pornography business’ is quite a fucking take.

                These assholes are struggling to stop their networks from generating Mickey Mouse even when someone specifically asks for Mickey Mouse. Why would any organization that size want radioactive criminal-to-possess inputs stirred into their venture-capital cash cow?

                • Bridger · 6 months ago · +1

                  They're fine with platforming fascists for a buck. Why would they have a problem with kid porn, especially if they can maintain a veneer of plausible deniability?

      • DarkThoughts@fedia.io · 6 months ago (edited) · +9

        I suggest you actually download Stable Diffusion and try it for yourself, because it's clear you don't have any clue what you're talking about. You can already make tiny people, shaved genitals, flat chests, child-like faces, etc. It's all already there. Literally no need for any LoRAs or very specifically trained models.

      • mindbleach · 6 months ago · +2

        Does an AI image of Shrek riding an avocado motorcycle imply there's a bunch of images of that in the data set?
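
        For what it's worth, that's easy to test with the same kind of pipeline sketched further up the thread (output filename is hypothetical):

        ```python
        # Reusing `pipe` from the earlier sketch: two concepts that almost
        # certainly never co-occur in the training data, composed at inference.
        pipe("Shrek riding an avocado motorcycle").images[0].save("shrek.png")
        ```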