Titled “Enhanced Visual Search,” this toggle permits iPhones to transmit photo data to Apple by default, raising concerns about user privacy and data-sharing practices.

    • Mikina@programming.dev · 1 day ago

      Tbh I’m not sure. I vaguely remember that hashes did play a role in how chatcontrol works, but I think it wasn’t looking just for a 1:1 match with known illegal content, but also for other signals? I remember reading that it had an awfully high false-positive rate, which someone then has to check manually. https://www.patrick-breyer.de/en/posts/chat-control/

      According to the Swiss Federal Police, 80% of the reports they receive (usually based on the method of hashing) are criminally irrelevant. Similarly, in Ireland, only 20% of NCMEC reports received in 2020 were confirmed as actual “child abuse material”.
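      As a rough illustration of why hash-based scanning can produce mostly irrelevant reports, here is a back-of-the-envelope base-rate sketch in Swift. Every number in it is invented for illustration; none of them are the rates of chatcontrol or any real scanner.

          // Base-rate arithmetic: even a low per-image false-positive rate
          // yields mostly irrelevant reports when genuinely illegal content
          // is rare. All numbers below are made up for illustration.
          let imagesScanned = 1_000_000.0
          let prevalence = 0.000_1         // assumed share of images that are actually illegal
          let truePositiveRate = 0.9       // assumed detection rate
          let falsePositiveRate = 0.000_5  // assumed false-alarm rate per innocent image

          let truePositives = imagesScanned * prevalence * truePositiveRate
          let falsePositives = imagesScanned * (1 - prevalence) * falsePositiveRate
          let precision = truePositives / (truePositives + falsePositives)

          print("Reports filed:", Int(truePositives + falsePositives))   // ~590
          print("Share that is actually relevant:", precision)           // ~0.15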

      • TaviRider@reddthat.com · 1 day ago

        The 1:1 matching and the porn detection were separate capabilities.

        Porn detection is called Communication Safety, and it only warns the user. If it’s set up in Screen Time as a child’s device, someone has to enter the parent’s Screen Time passcode to bypass the warning. That’s it. It’s entirely local to the device. The parent isn’t notified or shown the image, and Apple doesn’t get the image. It’s using an ML model, so it can have false positives.
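        To make that flow concrete, here is a minimal sketch of the behaviour described above. The names are hypothetical stand-ins, not Apple’s actual API; the point is that the classifier result only gates a local warning and nothing leaves the device.

            import Foundation

            // Sketch of the Communication Safety flow described above.
            // Hypothetical types; not Apple's API. Nothing here uploads the
            // image or notifies anyone; the result only drives a local warning.
            enum CommunicationSafetyOutcome {
                case showImage              // classifier saw nothing sensitive
                case warnUser               // adult device: user may tap through the warning
                case requireParentPasscode  // child device: Screen Time passcode needed to proceed
            }

            func evaluateIncomingImage(_ image: Data,
                                       isChildDevice: Bool,
                                       flaggedByLocalModel: (Data) -> Bool) -> CommunicationSafetyOutcome {
                // The ML model runs entirely on-device and, like any classifier,
                // can produce false positives.
                guard flaggedByLocalModel(image) else { return .showImage }
                return isChildDevice ? .requireParentPasscode : .warnUser
            }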

        CSAM detection was exact 1:1 matching using a privacy-preserving hashing system. It prevented users from uploading known CSAM to iCloud, and that’s it. Apple couldn’t tell whether there was a match or learn the hashes of the images being evaluated.
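        For contrast, here is a sketch of plain 1:1 list matching, assuming an ordinary SHA-256 digest and a locally readable hash list. The abandoned Apple design used a perceptual NeuralHash plus a private set intersection protocol precisely so that neither the device nor Apple could read off matches this directly; the sketch only illustrates the “compare against a fixed list, inspect nothing else” idea.

            import CryptoKit
            import Foundation

            // Naive exact matching against a list of known hashes. A plain
            // SHA-256 digest only matches byte-identical files; the real design
            // used a perceptual hash and cryptographic blinding instead.
            func matchesKnownList(_ imageData: Data, knownHashes: Set<String>) -> Bool {
                let digest = SHA256.hash(data: imageData)
                let hex = digest.map { String(format: "%02x", $0) }.joined()
                return knownHashes.contains(hex)
            }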

        Many people misunderstood and conflated the two capabilities, and often claimed without evidence that they did things that they were designed never to do. Apple abandoned the CSAM detection capability.