• Dkarma@lemmy.world
    9 months ago

    Lol you don’t need to train it ON CSAM to generate CSAM. Get a clue.

      • mindbleach
        9 months ago

        Generative Machine Learning models have been well documented as being able to produce explicit adult content, including child sexual abuse material (CSAM)

        You can’t generate CSAM because there’s no C to A.

    • LadyAutumn@lemmy.blahaj.zone
      9 months ago

      It should be illegal either way, to be clear. But you think they’re not training models on CSAM? You’re trusting in the morality/ethics of people creating AI-generated child pornography?

      • Greg Clarke@lemmy.ca
        9 months ago

        The use of CSAM in training generative AI models is an issue no matter how these models are being used.

        • L_Acacia@lemmy.one
          9 months ago

          The training doesn’t use CSAM; there’s a 0% chance big tech would use that in their datasets. The models are able to link concepts like “red” and “car” even if they’ve never seen a red car before.

          • AdrianTheFrog@lemmy.world
            9 months ago

            Well, with models like SD at least, the datasets are large enough and the employees few enough that it’s impossible to have a human review every image. They scrape images from the web and try to filter them with AI, but there’s still a chance of bad images getting through. This is why most companies add filters after the model as well as during the training process.

      • mindbleach
        9 months ago

        We’re trusting that billion-dollar corporate efforts don’t possess and label hyper-illegal images, specifically so people can make more of them. Because why the fuck would they.

        • Bridger
          9 months ago

          If there were more money to be made than the cost of defending it, they most definitely would.

          • mindbleach
            9 months ago

            ‘Google would love to be in the child pornography business’ is quite a fucking take.

            These assholes are struggling to stop their networks from generating Mickey Mouse even when someone specifically asks for Mickey Mouse. Why would any organization that size want radioactive criminal-to-possess inputs stirred into their venture-capital cash cow?

            • Bridger
              9 months ago

              They’re fine with platforming fascists for a buck. Why would they have a problem with kid porn, especially if they can maintain a veneer of plausible deniability?