An in-depth police report obtained by 404 Media shows how a school, and then the police, investigated a wave of AI-powered “nudify” apps in a high school.

  • gravitas_deficiency · +96 / −1 · 10 months ago

    Wow I am so glad this shit wasn’t around when I was in high school. This sounds horrifying to have to deal with as a student.

    • TwilightVulpine@lemmy.world · +18 / −1 · 10 months ago

      The indifference of AI advocates is just appalling too. “It’s going to happen anyway, they just gotta get over it, it’s not real.”

      Teens can barely get over mild perceived embarrassment, nevermind having porn replicas of them spread around, and having to deal with related harassment. I dread how that will affect them.

    • pdxfed@lemmy.world · +15 · 10 months ago

      There was an article 2-3 months back about a small town in Spain being torn apart by this kind of thing, and I braced myself, knowing it wouldn’t be long before it spread. What a time to be a kid or parent.

    • boogetyboo@aussie.zone · +9 · 10 months ago

      Yep. My school years predate social media, and mobile phones were pre-smartphone. Shit cameras, etc.

      But the rumour mill was vicious and damaging. Very hard to prove the truth when the lie is so salacious. The more you denied it, the more it seemed true.

      These poor kids. How do they defend against this shit?

    • BrianTheeBiscuiteer@lemmy.world · +5 · 10 months ago

      Just when I get the feeling kids these days are a little more tolerant than the last generation, NOPE! “Hey Chris! Watch this 5-min long video of our teacher taking a troll cock!”

  • just_another_person@lemmy.world · +45 / −4 · 10 months ago

    This is going to get pretty damn horrific real fast with Sora coming to general release soon. We need some restrictions and laws on the books.

      • Int_not_found@feddit.de · +27 / −1 · edited · 10 months ago

        1.) Germany has civil laws giving a person depicted in an image rights similar to those of its creator. It is also a criminal offense to publish images designed to damage a person’s public image. Those laws aren’t perfect, mainly because their wording is outdated, but the broader legal sentiment is there.

        2.) The police trace the origin through detective work. Social circles in schools aren’t that huge, so p2p distribution is pretty traceable, & publishing sites usually have IP logs.

        A criminal court decides the severity of the punishment for the perpetrator. A civil court decides the amount of monetary damages that were caused and have to be compensated by the perp or their legal guardian.

        People simply forwarding such material can also be liable (since they are distributing copyrighted material), & therefore the distribution can be slowed or stopped.

        3.) It gives the police a reason to investigate, gives victims a tool to stop distribution, & is a way to compensate victims for the damages caused.

        • General_Effort@lemmy.world · +12 / −9 · edited · 10 months ago

          1.) Germany has civil laws giving a person depicted in an image rights similar to those of its creator. It is also a criminal offense to publish images designed to damage a person’s public image. Those laws aren’t perfect, mainly because their wording is outdated, but the broader legal sentiment is there.

          Germany also has laws criminalizing insults. You can actually be prosecuted for calling someone an asshole, say. Americans tend to be horrified when they learn that. I wonder if feelings in that regard may be changing.

          AFAIK, it is unusual, internationally, that the English legal tradition does not have defamation (damaging someone’s reputation/public image) as a criminal offense, but only as a civil wrong. I think Germany may be unusual in the other direction. Not sure.

          2.) The police trace the origin through detective work. Social circles in schools aren’t that huge, so p2p distribution is pretty traceable, & publishing sites usually have IP logs.

          Ok, the police would interrogate the high-schoolers and demand to know who had the pictures, who made them, who shared them, etc… That would certainly be an important life lesson.

          The police would also seize the records of internet services. I’d think some people would have concerns about the level of government surveillance here; perhaps that should be addressed.

          How does that relate to encryption, for example? Some services may feel that they avoid a lot of bother and attract customers by not storing the relevant data. Should they be forced?

          3.) It gives the police a reason to investigate, gives victims a tool to stop distribution, & is a way to compensate victims for the damages caused.

          That’s what you want to happen. It does not consider what one would expect to actually happen. It’s fairly common for people of high school age to insult and defame each other. Do the German police commonly investigate this?

          • Int_not_found@feddit.de · +11 / −5 · edited · 10 months ago

            Germany also has laws criminalizing insults. You can actually be prosecuted for calling someone an asshole, say. Americans tend to be horrified when they learn that. I wonder if feelings in that regard may be changing.

            I don’t care about the feelings of Americans reading this, tbh.

            Germany is a western liberal democracy, same as the US.

            On the other hand, I’m horrified that you seem to equate a quick insult with deepfake porn of minors.

            The police would also seize the records of internet services. I’d think some people would have concerns about the level of government surveillance here; perhaps that should be addressed.

            Arguably the unrestricted access of government entities to this kind of data is greater in the US than in the EU.

            How does that relate to encryption, for example? Some services may feel that they avoid a lot of bother and attract customers by not storing the relevant data. Should they be forced?

            There are many entities that store data about you. Maybe the specific service doesn’t cooperate. But what about the server host, maybe the ad network, maybe the app store, certainly the payment processor.

            If the police can lay out how that data can help solve the case, providers should & can be forced by judges to hand over that data to a certain extent, both in the US and the EU.

            Do the German police commonly investigate this?

            Insults? No, those are mostly a civil matter, not a criminal one.

            (Deepfake) porn of minors? Yes, certainly.

            • General_Effort@lemmy.world · +1 · 9 months ago

              What’s with the downvotes? Lemmy is usually pretty negative on the whole data gathering thing, I thought. Shouldn’t I have brought this up? I don’t get it.

            • General_Effort@lemmy.world · +1 / −8 · 10 months ago

              On the other hand, I’m horrified that you seem to equate a quick insult with deepfake porn of minors.

              I don’t. Maybe a language issue.

              Arguably the unrestricted access of government entities to this kind of data is greater in the US than in the EU.

              Yes. The GDPR in particular means that service providers may not be allowed to collect the data you want to use for prosecution. Which is one reason I brought this up.

              It is refreshing to see someone who is not at all concerned about privacy. I usually feel that people are far too concerned about surveillance. I don’t remember the last time that I felt that a bit more concern would be good.

              For purposes of enforcement, the degree of surveillance allowed/mandated is important. The copyright industry has a large financial interest in tracking data on the net. I’m pretty sure that German High Schoolers don’t have too much trouble pirating media. This shows that there are practical limits to enforcing bans on internet services and data sharing. Technologically, a lot more surveillance would be possible. It would cost money that would have to be paid with the internet bill and through higher costs of various services. It would also have a chilling effect on various aspects of civil society.

              Discussing the pros and cons of total surveillance goes too far from the topic, I think.

              The question is simply: How well do you want this to be enforced and are you willing to pay the price (not only in money)?

              Insults? No, those are mostly a civil matter, not a criminal one.

              No, insults are first and foremost a criminal matter.

        • iAvicenna@lemmy.world · +2 · edited · 10 months ago

          All they have to do is make public examples of some kids and parents, and that will put an end to this in a couple of years.

        • RememberTheApollo@lemmy.world · +2 / −8 · 10 months ago

          Digitally watermark the image for identification purposes. Hash the creator’s hardware ID, MAC address, and IP address, and insert that via steganography or similar means. Just like printers embed a MIC (machine identification code) in the documents they print, AI-generated imagery should do the same at this point. It’s not perfect and probably has some undesirable consequences, but it’s better than nothing when trying to track down deepfake creators.
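          To make that concrete, here is a rough sketch of what such a steganographic tag could look like, assuming Python with Pillow; the machine_id() fingerprint below is a hypothetical stand-in for whatever hardware/MAC/IP hash a generator might collect, not any vendor’s actual scheme.

          ```python
          # Toy LSB watermark: hide a SHA-256 hash of a machine identifier in the
          # blue channel's least-significant bits. Needs a lossless format (PNG);
          # any lossy re-encode or crop will destroy it.
          import hashlib
          import uuid

          from PIL import Image


          def machine_id() -> bytes:
              # Hypothetical fingerprint: here just the MAC address from uuid.getnode().
              return hashlib.sha256(uuid.getnode().to_bytes(6, "big")).digest()


          def embed_watermark(src: str, dst: str) -> None:
              img = Image.open(src).convert("RGB")
              px = img.load()
              bits = "".join(f"{byte:08b}" for byte in machine_id())  # 256 bits
              w, _ = img.size
              for i, bit in enumerate(bits):  # assumes the image has >= 256 pixels
                  x, y = i % w, i // w
                  r, g, b = px[x, y]
                  px[x, y] = (r, g, (b & ~1) | int(bit))  # overwrite the blue LSB
              img.save(dst, "PNG")


          def extract_watermark(path: str) -> bytes:
              img = Image.open(path).convert("RGB")
              px, w = img.load(), img.size[0]
              bits = "".join(str(px[i % w, i // w][2] & 1) for i in range(256))
              return bytes(int(bits[i:i + 8], 2) for i in range(0, 256, 8))
          ```

          As the replies point out, though, anything embedded client-side like this only survives until someone re-encodes the image or runs a fork with the feature stripped out.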

          • abhibeckert@lemmy.world · +6 / −1 · edited · 10 months ago

            The difference is it costs billions of dollars to run a company manufacturing printers and it’s easy for law enforcement to pressure them into not printing money.

            It costs nothing to produce an AI image, you can run this stuff on a cheap gaming PC or laptop.

            And you can do it with open source software. If the software has restrictions on creating abusive material, you can find a fork with that feature disabled. If it has steganography, you can find one with that disabled too.

            You can tag an image to prove a certain person (or camera) took a photo. You can’t stop people from removing that.

            • yamanii@lemmy.world · +4 · 10 months ago

              Samsung’s AI does watermark its images in the EXIF metadata. Yes, it’s trivial to remove the EXIF, but it’s enough to catch these low-effort bad actors.
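              For illustration, a minimal sketch of the detection side, assuming Python with Pillow; the EXIF tags checked and the marker strings are assumptions for the example, not Samsung’s documented format.

              ```python
              # Scan common EXIF text fields for an "AI-generated" marker string.
              from PIL import Image
              from PIL.ExifTags import TAGS

              AI_MARKERS = ("generated by ai", "ai-generated")  # hypothetical markers


              def exif_ai_marker(path: str) -> str | None:
                  exif = Image.open(path).getexif()
                  for tag_id, value in exif.items():
                      name = TAGS.get(tag_id, str(tag_id))
                      if name in ("Software", "ImageDescription") and isinstance(value, str):
                          if any(m in value.lower() for m in AI_MARKERS):
                              return f"{name}: {value}"
                  return None


              print(exif_ai_marker("photo.jpg") or "no AI marker found in EXIF")
              ```

              Which is also why stripping the EXIF (or just screenshotting the image) defeats it, as noted above.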

              • General_Effort@lemmy.world · +3 / −1 · 10 months ago

                I think all the main AI services watermark their images (invisibly, not in the metadata). A nudify service might not, I imagine.

                I was rather wondering about the support for extensive surveillance.

      • just_another_person@lemmy.world · +4 · 10 months ago

        I can better explain what I don’t want out there, and we’ve already seen some pretty awful shit. This will usher in a completely new generation of CSAM, bullying, harassment, and political sabotage. It seems we should have some protections or penalties in place to cover such things.

        • General_Effort@lemmy.world · +2 / −2 · 10 months ago

          You say what you (don’t) want and trust the experts (i.e. your representatives) to get us there. That’s reasonable. Suppose no new laws are made, perhaps because the experts think that new laws would do more harm than good. Would you continue to trust the experts?

    • redcalcium@lemmy.institute · +11 / −1 · 10 months ago

      I kinda doubt porn will be a problem with Sora, just like it’s not a problem with DALL-E. The model is locked down on OpenAI’s servers, and open-source models are nowhere near as good yet. Even if a comparable downloadable model existed, it would be so computationally expensive that I doubt teens could freely run it.

      • just another dev@lemmy.my-box.dev · +24 / −1 · 10 months ago

        open-source models are nowhere near as good yet

        By the time a law gets adopted, it probably will be. I wouldn’t want to rely on the “kindness” of commercial entities as the sole protector of consumer welfare. We’ve seen how well that works with Google and Facebook.

      • cm0002@lemmy.world · +16 · 10 months ago

        Even if a comparable downloadable model existed, it would be so computationally expensive that I doubt teens could freely run it.

        For now, but with every new tech, hardware efficiency optimization is not far behind. Especially considering that the performance required for training != the performance required for inference.

        Considering the glacial speed at which our government moves, I’d bet on those hardware efficiency optimizations making it out before any significant law gets implemented.

        • GluWu@lemm.ee · +3 · 10 months ago

          Wait for ASICs. When I started mining Bitcoin on dual-core CPUs, I would never have believed a little USB stick would be doing 100x what 30 of those full-sized PCs could. The software behind generative models still needs to be fleshed out for hardware makers to know what direction to go. But a little AI machine that draws maybe 50 W max but has 256 GB of high-speed RAM for model storage is not far away. Anyone with $500 (or whatever, idk) will be able to order a machine off eBay, load it up with whatever models they want, and start generating whatever they want locally.

          • GigglyBobble@kbin.social · +1 · 10 months ago

            I don’t think it’ll be as easy as calculating SHA256 hashes, so ASICs as small as this might never be a thing.

            On the other hand, brains do use orders of magnitude less power, so who knows.

      • abhibeckert@lemmy.world · +5 · edited · 10 months ago

        Um… the Taylor Swift porn deepfakes were made with DALL-E.

        Sure - they try to prevent that stuff, but it’s hardly perfect. And not all bullying is easily spotted. Imagine a deepfake of a kid sending a text message, but the bubbles are green. Or maybe they’re smiling at someone they hate.

        Also, Stable Diffusion is more than good enough for this stuff. It’s free, and any decent gaming laptop can run it. Takes mine 20 seconds to produce a decent deepfake… I’ve used it to touch up my own photos.

        • just another dev@lemmy.my-box.dev · +1 · 10 months ago

          the Taylor Swift porn deepfakes were made with DALL-E.

          Got a source for that? I only have experience with DALL-E 3, but that is really picky about nudity, copyright and portrait right.

  • some_guy@lemmy.sdf.org · +38 / −1 · 10 months ago

    I’m so relieved to be an average or maybe slightly below average looking adult male with an established career. It would be hell being a teenage girl in today’s world.

  • disgrunty@lemmy.world · +6 / −2 · 10 months ago

    And this is why I refuse to take part in selfie culture. Shit’s dangerous. When can we go back to the anonymous internet of yore?

  • Pratai@lemmy.cafe · +2 · 9 months ago

    AI should never have been allowed to get where it is without a system in place to control it. But that’s how these things work. So… no one should really be surprised that assholes will use it for the greater goal of worsening civilization, for their own amusement.