Cloudflare-free link for Tor/Tails users: https://web.archive.org/web/20230926042518/https://balkaninsight.com/2023/09/25/who-benefits-inside-the-eus-fight-over-scanning-for-child-sex-content/

The regulation would introduce a complex legal architecture reliant on AI tools to detect images, videos and speech containing sexual abuse against minors, as well as attempts to groom children (so-called ‘client-side scanning’).

If the regulation undermines encryption, it risks introducing new vulnerabilities, critics argue. “Who will benefit from the legislation?” Gerkens asked. “Not the children.”

Groups like Thorn are using everything they can to push this legislation forward, not just because they feel this is the way forward to combat child sexual abuse, but also because they have a commercial interest in doing so.

They are self-interested in promoting child exploitation as a problem that happens “online,” and then proposing quick (and profitable) technical solutions as a remedy to what is in reality a deep social and cultural problem. (…) I don’t think governments understand just how expensive and fallible these systems are.

The regulation has […] been met with alarm from privacy advocates and tech specialists, who say it will unleash a massive new surveillance system and threaten the use of end-to-end encryption, currently the ultimate way to secure digital communications.

A Dutch government official, speaking on condition of anonymity, said: “The Netherlands has serious concerns with regard to the current proposals to detect unknown CSAM and address grooming, as current technologies lead to a high number of false positives. The resulting infringement of fundamental rights is not proportionate.”
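
To see the scale problem behind that objection, here is a quick base-rate calculation. It is a rough sketch: the message volume, prevalence and accuracy figures below are illustrative assumptions, not numbers from the proposal or the article.

```python
# Illustrative base-rate arithmetic (all figures are assumptions):
# even a classifier that sounds accurate on paper flags far more
# innocent messages than abusive ones when actual abuse is rare.

messages_per_day = 10_000_000_000   # assumed EU-wide message volume
prevalence = 1e-6                   # assumed share of messages that are abusive
true_positive_rate = 0.99           # assumed detection rate on abusive messages
false_positive_rate = 0.001         # assumed error rate on innocent messages

abusive = messages_per_day * prevalence
innocent = messages_per_day - abusive

true_positives = abusive * true_positive_rate
false_positives = innocent * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"flags per day:         {true_positives + false_positives:,.0f}")
print(f"of which false alarms: {false_positives:,.0f}")
print(f"chance a flag is real: {precision:.2%}")
```

Under these assumptions, roughly a thousand innocent messages get flagged for every genuine case, and a flag is real less than 0.1% of the time. That is the disproportionality the official is pointing at.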

  • Dodecahedron December · 1 year ago

    That’s the thing. CSAM filtering can be useful when images are posted to public websites. But scanning private files, no matter what the scan is for, is a major privacy concern.
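
    For context, the filtering that works on public websites is typically hash matching: each uploaded image is reduced to a perceptual hash and compared against a database of hashes of already-known material (PhotoDNA and PDQ work roughly this way). Here is a toy average-hash version just to show the mechanism; it is a sketch, real systems use far more robust hashes, and the blocklist value below is a made-up placeholder.

    ```python
    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Toy perceptual hash: shrink to 8x8 grayscale and record
        which pixels are brighter than the mean (a 64-bit fingerprint)."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two fingerprints."""
        return bin(a ^ b).count("1")

    # Hypothetical blocklist of fingerprints of known images (placeholder value).
    known_hashes = {0x81C3E7FF7E3C1800}

    def is_match(path: str, threshold: int = 5) -> bool:
        """Flag an upload whose fingerprint is close to any known one."""
        return any(hamming(average_hash(path), k) <= threshold
                   for k in known_hashes)
    ```

    The key difference: this runs on a server, against content someone chose to publish. Client-side scanning would run the same kind of matching inside everyone’s app, on private content before it is end-to-end encrypted.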

    • Saki@monero.town (OP, mod) · 1 year ago · edited

      CSAM is one thing, like pictures. But how can they detect “attempts to groom children”? If someone says on Mastodon that they’re sad at school, and I comment saying something nice to help them feel better, would that count as potential grooming? Will AI read & analyze every private message like that? And, for better AI “predictions”, will every user be required to verify their age, sex, sexual orientation, hobbies, etc.?

      Btw Happy Birthday GNU! 🎂