This is entirely the fault of the IWF and Microsoft, who create “exclusive” proprietary CSAM prevention software and then license it only to big tech companies.
https://gleasonator.com/objects/bf56ad41-7168-4db9-be17-23b7e5e08991
It totally looks to me like Big Tech is gonna try to leverage CSAM prevention against the Fediverse. “Oh you want to prevent sex crimes against CHILDREN? Sure, but only on our proprietary services because we’re certainly not gonna fight CP for FREE!”
To me it seems like a push towards the whole “own nothing” idea. Whether it’s something like CSAM detection or even mundane SaaS, things are slowly shifting away from the end user having control over their “own” devices.
I’m torn, because on the one hand, pedophiles and child abusers deserve the severest of consequences in my opinion; on the other hand, I also think that people should be able to do and/or say whatever they want so long as it’s not causing actual harm to another.
Well, part of it is that I wouldn’t know what to do if CSAM showed up on my social media feeds, fediverse or elsewhere. I guess I’d flag it and report it with whatever moderation tools the site provides, but I’d want to be able to report it to an actual authority, like “hey, I found this suspicious post at this URL, can you check it out,” who would then follow up with me on what action was taken.
It’s much more likely a matter of preventing their detection technology from falling into the hands of people who would want to circumvent it.
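For context on why secrecy matters here: systems like PhotoDNA work by comparing perceptual hashes of uploaded images against a list of hashes of known abuse material, matching within a distance threshold so that minor edits (resizing, recompression) still hit. PhotoDNA itself is proprietary and its details aren’t public, so the sketch below is only an illustration using the open-source `imagehash` library as a stand-in; the blocklist contents and threshold are hypothetical placeholders.

```python
# Illustrative sketch of hash-list matching, NOT PhotoDNA (which is closed).
# Uses the open-source perceptual hash (pHash) from the `imagehash` library.
from PIL import Image
import imagehash

# Hypothetical blocklist of perceptual hashes of known-bad images
# (hex strings). The entry below is a placeholder, not a real hash.
BLOCKLIST = {imagehash.hex_to_hash(h) for h in [
    "ffd8e0a0b0c0d0f0",
]}

# Perceptual hashes tolerate small edits, so matching uses a
# Hamming-distance threshold rather than exact equality.
# The value 8 is an assumed example, not a known production setting.
MAX_DISTANCE = 8

def is_flagged(path: str) -> bool:
    """Return True if the image's perceptual hash is near a blocklisted one."""
    h = imagehash.phash(Image.open(path))
    # `h - known` is the Hamming distance between the two 64-bit hashes.
    return any(h - known <= MAX_DISTANCE for known in BLOCKLIST)
```

The circumvention worry follows directly from this structure: anyone holding the hash list and the matching algorithm could repeatedly perturb an image until `is_flagged` returns False, which is presumably part of why the real lists and tooling are kept under tight license rather than released openly.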