This is entirely the fault of the IWF and Microsoft, who build “exclusive”, proprietary CSAM-prevention software and then license it only to big tech companies.
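
For context: tools like PhotoDNA work by computing a perceptual hash of each uploaded image and checking it against a curated list of hashes of known abuse material (the IWF maintains one such list). The sketch below is a hypothetical, heavily simplified version of that check; the function names are made up, the hash list is empty, and a plain cryptographic digest stands in for the proprietary perceptual hash. It only illustrates the kind of matching step that smaller, self-hosted platforms are locked out of.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for a licensed hash list (e.g. the IWF's).
# Real deployments never hand the list itself to the platform;
# matching happens through a gated, licensed service.
known_hashes: set[str] = set()

def fingerprint(image_path: Path) -> str:
    """Fingerprint an uploaded file.

    A plain SHA-256 of the bytes is used here purely for illustration;
    the real tools use proprietary perceptual hashes (e.g. PhotoDNA)
    that survive re-encoding, resizing, and minor edits.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def should_block(image_path: Path) -> bool:
    """Return True if the upload matches a known-abuse hash."""
    return fingerprint(image_path) in known_hashes
```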

    • BootlegHermit@kbin.social

      To me it seems like a push towards the whole “own nothing” idea. Whether it’s something like CSAM detection or even mundane SaaS, things are slowly shifting away from the end user having control over their “own” devices.

      I’m torn, because on the one hand, pedophiles and child abusers deserve the severest of consequences in my opinion; on the other hand, I also think that people should be able to do and/or say whatever they want so long as it’s not causing actual harm to another.

      • Trekman10

        Well, part of it is that I wouldn’t know what to do if CSAM showed up on my social media feeds, fediverse or elsewhere. I guess I’d flag it and report it with whatever moderation tools the site provides, but I’d want to be able to report it to an actual authority (“hey, I found this suspicious post at this URL, can you check it out?”) and have them follow up with me about what action was taken.

      • elscallr@kbin.social

        It’s much more likely that it’s a matter of preventing their detection technology from falling into the hands of people who would want to circumvent it.
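
        Part of why the matching details stay closed: a naive byte-level hash is trivially evaded, and the perceptual hashes that resist such edits are easier to defeat if attackers can study them. A tiny illustration of the naive case, using only the standard library (the byte strings are placeholders, not real image data):

        ```python
        import hashlib

        original = b"...image bytes..."
        tweaked = original + b"\x00"  # one appended byte; many formats render identically

        # Cryptographic hashes change completely under any edit, so a plain
        # byte-hash blocklist is defeated by the smallest tweak. Perceptual
        # hashes tolerate such edits, and keeping their internals (and the
        # hash lists) closed makes it harder to craft images that evade them.
        print(hashlib.sha256(original).hexdigest())
        print(hashlib.sha256(tweaked).hexdigest())
        ```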