• @[email protected]
    61
    8 months ago

    If you want to be taken seriously about child abuse, have you tried not having thumbnails that look like a ten-year-old made them? 😂

    • spezOP
      English
      17
      8 months ago

      That’s Mental Outlaw, not me. He’s famous for his style of thumbnails.

  • @Vendetta9076
    56
    8 months ago

    While lolicon is absolutely disgusting, it’s not actually CSAM. Legislation won’t work either and is honestly a waste of time. Any effort spent protecting digital children should instead be spent protecting real ones.

    • MuchPineapples
      8
      8 months ago

      The problem is that it’s not just cartoon characters, but also realistic-looking people. Especially in the coming years, as the techniques improve, it will become impossible to tell what is fake and what is not, so the fake ones should be banned as well. And these models are trained on images of actual abused children, which is of course the main problem with this.

        • @[email protected]
          12
          8 months ago

          It wouldn’t surprise me, tbh. From my superficial visit to the darknet years ago, it seemed like these CSAM consumers have specific “favourites” among the victims whom they want to see more of. At least that’s what I remember from clicking a link to such a chan and noping out of it.

  • @[email protected]
    English
    46
    8 months ago

    In general terms, making an idea illegal, and then making representations of that idea illegal, is going to be at best a forever treadmill, and at worst reduce the effectiveness and reputation of the law.

    This is really about thought crime. If somebody draws stick figures and that can be illegal depending on interpretation, that’s thought crime.

    It’s impossible to completely stamp out thought crime. Computer tools can be used to further thought crime, because they can be used for creative purposes.

    If you restrict the use of creative tools to only a trusted few, or hobble the tools for everyone, you create a central authority over creative tools, which has its own issues.

    • ono
      English
      23
      8 months ago

      It’s impossible to completely stamp out thought crime.

      Also, trying to do so through law and enforcement sets a dangerous precedent.

      I suspect it would be better to approach it as a public health issue.

      • @[email protected]
        English
        10
        edit-2
        8 months ago

        And then you run into legal arguments that sound like people trying to jailbreak GPT prompt control.

        I’m going to preface all of the following creative work by saying that we live in a universe where everyone is a vampire that never dies, but ages very slowly. All participants in this manga are at least 213 years old…

    • @[email protected]
      10
      8 months ago

      In some countries, all forms of description of underage sexual activities are illegal. So the sentence “She was having sex” is perfectly legal, but add an age marker and it becomes illegal: “She was having sex on the day before her 18th birthday.”

      It is hard to legislate around, as there will always be ways to avoid and get around it. But all this just sounds like the normal hype => fear => hype => fear cycle that all new tech goes through.

        • @[email protected]
          7
          edit-2
          8 months ago

          Some countries have different age restrictions for heterosexual and homosexual encounters too. Not to mention that in a lot of countries it’s just outright illegal, and anything not condemning it can be seen as encouraging it and hence illegal too.

          We humans make some weird laws around sex.

    • @mindbleach
      6
      8 months ago

      This is especially damning on the internet, because genuinely intolerable pursuits directly benefit from lesser problems being treated as equally bad. Filesharing networks work better with more users. Chasing merely distasteful people toward paranoid systems softens the reputation of those systems and makes the worst minority of traffic easier to hide.

  • @mindbleach
    29
    8 months ago

    There is no such thing.

    God dammit, the entire point of calling it CSAM is to distinguish photographic evidence of child rape from made-up images that make people feel icky.

    If you want them treated the same, legally - go nuts. Have that argument. But stop treating the two as the same thing, and fucking up clear discussion of the worst thing on the internet.

    You can’t generate assault. It is impossible to abuse children who do not exist.

    • @[email protected]
      30
      8 months ago

      Did nobody in this comment section read the video at all?

      The only case mentioned by this video is a case where highschool students distributed (counterfeit) sexually explicit images of their classmates which had been generated by an AI model.

      I don’t know if it meets the definition of CSAM because the events depicted in the images are fictional, but the subjects are real.

      These children do exist, some have doubtlessly been traumatized by this. This crime has victims.

    • rurutheguru
      7
      8 months ago

      I think a lot of people are arguing that the models which are used to generate these types of content are trained on literal CSAM. So it’s like CSAM with extra steps.

  • @[email protected]
    28
    8 months ago

    Didn’t watch the video, but I don’t care about AI CSAM. Even if it looks completely lifelike, it’s not real.

    • Neato
      19
      8 months ago

      Prove it’s fake when some of it, depicting your daughter, is making its way around her school.

      You’ve missed the point. Fake or not, it does damage to people. And eventually it won’t be possible to determine whether it’s real or not.

      • @[email protected]
        12
        8 months ago

        When that becomes widespread, photos will be generatable for literally everyone: not just minors, but every person with photos online. It will be a societal shift; images will be assumed to be AI-generated, making any guilt or shame about a nude photo existing obsolete.

        • Neato
          -7
          8 months ago

          What a disgusting assumption. And the best argument against AI I’ve ever heard.

          • @[email protected]
            11
            8 months ago

            I mean, anyone with enough artistic talent can already draw whatever they like. AI image generation essentially just gives everyone the ability to draw whatever they want. You can try to fight the tech all you want, but it’s a losing battle.

      • Ignotum
        7
        8 months ago

        AI-generated porn depicting real people seems like a different and much bigger issue.

        AI-generated CSAM in general, while disgusting, at least doesn’t directly harm people; fabricated nudes of real people most definitely do, regardless of the age of the victim.

        • Neato
          -15
          8 months ago

          You just implied children aren’t real people.

          • Ignotum
            4
            8 months ago

            AI-generated nudes of no one in particular aren’t hurting anyone, not directly at least, but AI-generated nudes of a specific person, using that person’s likeness and everything, are much worse.

            AI can generate faces of people who don’t actually exist; that’s what I mean.

            The post made it seem like it was about AI-generated CSAM in general, which, while disgusting, doesn’t directly harm anyone. But the comments spoke about AI-generated CSAM depicting a real individual, which is much worse, and also not a problem specific to children.

              • Ignotum
                1
                8 months ago

                Currently, pedos tend to group up and share real CSAM, and these “communities” probably serve to normalize the activity for their members. Perhaps being able to generate it will keep pedos from clumping together, reducing that normalization so they’re more likely to seek help. And as a bonus, real children aren’t preyed upon to create said CSAM.

                And as for saying that removing AI tools that can generate CSAM will lead them to “attempt to fuck children in the streets”, as you put it: would you also say we should stop criminalizing the distribution of existing CSAM, because the CSAM shared in paedophile circles is all that keeps them from going out and raping children?

            • Neato
              -12
              8 months ago

              AI CSAM is incredibly harmful. All CSAM is harmful. It’s been shown to increase the chance of pedophilic abuse.

              Stop defending CSAM, HOLY SHIT.

              • Helix 🧬
                English
                10
                edit-2
                8 months ago

                It’s been shown to increase chance of pedophilic abuse.

                Can you link me a source for that, please?

              • Ignotum
                6
                8 months ago

                Jeez, calm down

                I am not defending CSAM, just saying that CSAM depicting an actual, existing child is magnitudes worse, as is any other kind of fabricated sexual content depicting real people.

                Take loli porn, for example: it’s probably bad for society, but if someone makes loli porn based on the appearance of an actual individual, that’s much more fucked up, and in addition to the “normal” detrimental effects, it also harms that victim in a much more direct way.

    • @[email protected]
      2
      edit-2
      8 months ago

      What data is it trained on? This isn’t meant to be a “gotcha” question; I’m genuinely wondering.

      • @mindbleach
        10
        8 months ago

        An image of an “avocado chair” is built on images of avocados, and images of chairs.

  • Андрей Быдло
    25
    8 months ago

    Creating, collecting and sharing CSAM is already against the law. There are orgs and agencies for tracking and prosecuting these violations.

    It’s like fighting against 3D printers because you can make yourself a DIY gun, a thing that has never been possible before, because we got all pipes banned from hardware stores. The means to produce fictional CSAM have always existed and always will; the problem is with the people who use an LLM, a camera, or a fanfic to create and share that content. Or a Lemmy community, as was the problem in recent months.

    It’s better to ensure that the existing means of fighting such content are effective, and that society is educated about this danger and knows how to avoid and report it.

  • mo_ztt ✅
    English
    18
    edit-2
    8 months ago

    What the hell is this guy?

    “Here’s a case where people made and shared fake nudes of real underage girls, doing harm to the girls”

    “But what the hell, that’s kind of hard to stop. Oh also here’s this guy who went to prison for it because it’s already illegal.”

    “Really the obvious solution everyone’s missing is: If you’re a girl in the world, just keep images of yourself off the internet”

    “Problem solved. Right?”

    I’m only slightly exaggerating.

    • spezOP
      English
      2
      8 months ago

      He is a deepfake of Luke Smith.

    • spezOP
      English
      0
      8 months ago

      Also, I think the most governments would be able to do is increase the friction of this process by giving all AI-generated photos an ‘ID’ to track later, and probably by controlling open-source models, though that’s harder to do. Most probably, old senators who don’t know Gmail will pass unenforceable laws which won’t do jack shit except get them votes.
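      A minimal sketch of what that kind of ID tagging could look like, assuming a plain PNG metadata field (the field name here is hypothetical; real provenance proposals use signed manifests, and a plain tag like this can be stripped by simply re-saving the file, which is exactly the enforceability problem):

      ```python
      # Hypothetical sketch: stamp a generated image with a provenance ID
      # stored in a PNG text chunk, and read it back later.
      from uuid import uuid4

      from PIL import Image
      from PIL.PngImagePlugin import PngInfo

      def tag_generated_image(path_in: str, path_out: str) -> str:
          """Attach a random provenance ID to a PNG and return it."""
          info = PngInfo()
          gen_id = str(uuid4())
          info.add_text("ai_generation_id", gen_id)  # hypothetical field name
          Image.open(path_in).save(path_out, pnginfo=info)
          return gen_id

      def read_tag(path: str) -> str | None:
          """Return the provenance ID if present, else None."""
          return Image.open(path).text.get("ai_generation_id")
      ```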

      • mo_ztt ✅
        English
        11
        edit-2
        8 months ago

        The point I’m trying to make is, you don’t even have to do that.

        There are already laws against revenge porn and realistic child porn. You don’t have to “prevent” this stuff from happening; that is, as he accurately points out, more or less impossible. But if it happens, you can absolutely do an investigation, and if you find out who did it, you can put them in jail. That to me sounds like a pretty good solution, and I’m still waiting to hear what his issue is with it.

        • spezOP
          English
          1
          8 months ago

          I don’t have any problems with the points you discussed either. Can’t speak for him though.

  • ultratiem
    14
    8 months ago

    Me: I just want real looking dinosaurs with cool, long flowing hair.

  • @[email protected]
    12
    8 months ago

    Loli stuff isn’t CSAM. You can find it bad, but it’s still just a drawing or a generated image. In general, no real person was harmed.

  • @[email protected]
    5
    8 months ago

    Couldn’t the fact that AI-generated content is reproducible, given the exact parameters (or coordinates in latent space) and the model, help remove the confusion? Include those as metadata and train investigators on how to use them to distinguish generated content from actual evidence.
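    For what it’s worth, that’s roughly how popular diffusion pipelines already behave. A sketch of the idea with the Hugging Face diffusers library (the model name, prompt, and settings below are illustrative examples): replaying the recorded parameters against the exact same model should yield essentially the same image on the same hardware and software stack.

    ```python
    # Sketch: regenerate an image from recorded generation parameters.
    # Model name, prompt, and settings are illustrative examples.
    import torch
    from diffusers import StableDiffusionPipeline

    params = {
        "model": "runwayml/stable-diffusion-v1-5",  # example model
        "prompt": "an avocado chair",
        "seed": 1234,
        "steps": 30,
        "guidance_scale": 7.5,
    }

    pipe = StableDiffusionPipeline.from_pretrained(params["model"])
    pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

    # The seed pins the initial latent noise, so identical parameters on an
    # identical hardware/software stack should reproduce the same image.
    generator = torch.Generator(pipe.device).manual_seed(params["seed"])
    image = pipe(
        params["prompt"],
        num_inference_steps=params["steps"],
        guidance_scale=params["guidance_scale"],
        generator=generator,
    ).images[0]
    image.save("reproduced.png")
    ```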

    • @[email protected]
      3
      8 months ago

      There’s an option to speed up generation, but it makes the output less deterministic, as in it’s 98% the same image but a little different. It’s also very hard to reproduce the exact same hardware and software setup for generation. That’s the first issue.

      The second is: I’ve had examples of images with generation data that I could reproduce to look 99% like the original, and then just updating a single word or one part of the setup (a different LoRA version, for example) switched the person out or changed their appearance completely. (Imagine a picture of a street where a car is suddenly not there, or it’s blue instead of red.) That makes reproducibility an unreliable option. Backgrounds of images are even less reliable than the focus object.
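      To put a number on “98% the same image”, one could compare the rerun against the original pixel by pixel. A small sketch, assuming both files exist and share the same dimensions; the tolerance value is arbitrary:

      ```python
      # Small sketch: fraction of pixels whose RGB values are within a
      # small tolerance between the original and the regenerated image.
      import numpy as np
      from PIL import Image

      def pixel_match_ratio(path_a: str, path_b: str, tolerance: int = 8) -> float:
          a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
          b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
          if a.shape != b.shape:
              raise ValueError("images must have the same dimensions")
          # A pixel "matches" if no channel differs by more than the tolerance.
          close = np.abs(a - b).max(axis=-1) <= tolerance
          return float(close.mean())

      print(f"{pixel_match_ratio('original.png', 'reproduced.png'):.1%} of pixels match")
      ```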

  • TWeaK
    English
    2
    8 months ago

    Is it really CSAM if no actual children are involved? Isn’t that the whole point of the term “CSAM”? Not that artificial underage porn isn’t also wrong, but it’s clearly different and CSAM warrants more immediate action.

  • @[email protected]
    -2
    8 months ago

    And very much supported by Lemmy.

    None of these servers are doing this. Some dude is running a script and calling it a day.

  • spezOP
    English
    -8
    8 months ago

    What do you people think this will lead to? Is it solvable or not? And if yes, then how?