• rufus@discuss.tchncs.de
    11 months ago

    In computer security it always depends on your threat model. WhatsApp is supposed to be end-to-end encrypted, so nobody can intercept your messages. However: once someone flags a message as inappropriate, this gets circumvented and the message is forwarded to Meta. That is only supposed to happen when a recipient reports it, so it’s unlikely in a family group. I trust this actually works the way Meta tells us, though I can’t be sure, because I haven’t dissected the app, and this may change in the future. And there is lawful intercept.

    Mind that people can download or screenshot messages and forward them or do whatever they like with the pictures.

    And another thing: if you have sync enabled, Google Photos will upload the pictures you take to its cloud servers, and they’ll end up there. Apple does the same with iCloud. As far as I know, both platforms automatically scan pictures to help fight crime and child exploitation. We aren’t allowed to know how those algorithms work in detail. I doubt a toddler in clothes or wrapped in a blanket will trigger the automated detection; they claim a ‘high level of accuracy’. But people generally advise against taking pictures of children without clothes with a smartphone. Bad incidents have already happened.

    Edit: Apple seems to have pushed for cloud scanning initially, but currently that doesn’t happen any more. They have some on-device filters, as far as I understand.

    • kirklennon@kbin.social
      11 months ago

      As far as I know both platforms automatically scan pictures to help fight crime and child exploitation.

      Apple doesn’t. They should but they don’t. They came up with a really clever system that would do the actual scanning on your device immediately before uploading to iCloud, so their servers would never need to analyze your photos, but people went insane after they announced the plan.

      • rufus@discuss.tchncs.de
        11 months ago

        Oh. I didn’t know that. I don’t use Apple products and just read the news; I must have missed how the story turned out. Thanks for the info.

        Technically, I suppose it doesn’t make a huge difference. The picture still gets scanned by Apple software, and sent to them if it’s deemed conspicuous. An on-device algorithm is probably limited by processing power and energy budget, so it might even be less accurate, but that’s just my speculation. I think moving the scan on-device is more of a marketing stunt: the provider reduces cost because they don’t need additional servers to filter the content, and in the end it doesn’t really matter where exactly the content is processed if it’s a continuous chain like the Apple ecosystem.

        The last story I linked about the dad being incriminated for sending the doctor a picture would play out the same way, regardless.

        Edit: I googled it, and it seems the story with Apple has changed multiple times. The last article I read says they don’t even do on-device scanning any more, just a ‘nude filter’, whatever that is. I’m cautious around cloud services anyway. All of that might change and also affect old pictures. We just avoided mandatory content filtering in the EU, and upload filters and the like are debated regularly. The US has also updated its laws on internet crime and the prevention of child exploitation in recent years. I’m generally unsure where we’re headed with this.

        • kirklennon@kbin.social
          11 months ago

          The proposal was only for photos stored on iCloud. Apple has a legitimate interest in not wanting to actually host abuse material on their servers. The plan was also calibrated for a one-in-one-trillion chance of falsely flagging an account (it would require multiple matches before an account could be flagged), followed by a manual review by an employee before anything was reported to the authorities. It was very carefully designed.

          • rufus@discuss.tchncs.de
            11 months ago

            Do you happen to know a good source for information on this? I don’t want to hijack this discussion, since it’s not that closely related to the original subject… But I’d be interested in more technical information. Most news articles seem to be a bit biased, and I get it: both privacy and the protection of children are sensitive topics, and there are feelings involved.

            One in a trillion sounds like the probability of a hash collision. So that would just be checking whether they already have the specific image in their database. It would trigger if someone downloaded an already-known image, but not detect new images taken with a camera. I’m somewhat fine with that.
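
            To make that concrete, here is a minimal, hypothetical sketch of such a database lookup in Python. It uses a cryptographic hash (SHA-256) for simplicity; Apple’s proposal actually used a perceptual hash (NeuralHash) that also matches resized or re-encoded copies, so this only illustrates the general idea:

```python
import hashlib

# Hypothetical database of digests of already-known images.
# Real systems use perceptual hashes, which survive resizing and
# re-encoding; SHA-256 only matches byte-identical files.
KNOWN_DIGESTS = {
    hashlib.sha256(b"foo").hexdigest(),  # stands in for a known image
}

def is_known(image_bytes: bytes) -> bool:
    """True only if this exact file is already in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_DIGESTS

print(is_known(b"foo"))  # True: a re-downloaded copy of a known image
print(is_known(b"bar"))  # False: a new photo has a digest nobody has seen
```

            A freshly taken picture has a digest that appears in no database, which is why this kind of matching can only find already-catalogued material.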

            And I was under the impression that iPhones connected to iCloud sync their pictures by default? So “only for photos stored on iCloud” would practically mean every image you take, unless you deliberately changed the settings on your iPhone?

            • kirklennon@kbin.social
              11 months ago

              Do you happen to know a good source for information on this?

              Apple released detailed whitepapers and information about it when it was originally proposed, but they shelved the plan, so I don’t think those are still readily available.

              One in a trillion sounds like a probability of a hash collision.

              Basically yes, but they’re assuming a much greater likelihood of a single hash collision. The system would upload a receipt of the on-device scan along with each photo. A threshold number of matches would be set to achieve the one in a trillion confidence level. I believe the initial estimate was roughly 30 images. In other words, you’d need to be uploading literally dozens of CSAM images for your account to get flagged. And these accompanying receipts use advanced cryptography so it’s not like they’re seeing “oh this account has 5 potential matches and this one has 10”; anything below the threshold would have zero flags. Only when enough “bad” receipts showed up for the same account would they collectively flag it.
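
              The effect of such a threshold can be sketched with basic probability. The numbers below are made up for illustration, not Apple’s: even assuming a pessimistic 1-in-10,000 false-match rate per photo, requiring 30 matches pushes the chance of falsely flagging an account with 10,000 innocent photos far below one in a trillion:

```python
def binom_tail(n: int, p: float, t: int) -> float:
    """P(X >= t) for X ~ Binomial(n, p): the chance that an account
    with n innocent photos accrues at least t false hash matches."""
    q = 1.0 - p
    term = q ** n          # start with P(X = 0)
    total = 0.0
    for k in range(n + 1):
        if k >= t:
            total += term
        if k < n:          # recurrence: P(X = k+1) from P(X = k)
            term *= (n - k) / (k + 1) * (p / q)
    return total

# Made-up numbers: 10,000 photos, 1-in-10,000 false-match rate per photo.
print(binom_tail(10_000, 1e-4, 1))   # one false match: quite likely
print(binom_tail(10_000, 1e-4, 30))  # thirty false matches: vanishingly rare
```

              This is why a single stray collision means nothing under such a scheme, while dozens of matches on one account are overwhelming evidence.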

              And I was under the impression that iPhones connected to the iCloud sync the pictures per default?

              This is for people who use iCloud Photo Library, which you have to turn on.

              • rufus@discuss.tchncs.de
                11 months ago

                Thank you very much for typing that out for me! This seems to be the first sound solution I’ve read about. I think I would happily deploy something like that on my own (potential) server. I’ll have to think about it and try to dig up more information.

                Lately, I’ve been following the news about EU data retention, and all they come up with are solutions that amount to blanket surveillance of every citizen, are a slippery slope, and come with many downsides. The justification is always “won’t somebody please think of the children”, and the proposed solution is to break end-to-end encryption for everyone. They could have just implemented something like this instead. Okay, I do actually know why they don’t: there is a lobby pushing for general surveillance, and protecting children is just the superficial argument used to gain acceptance for it. So they’re not interested in effective solutions to the specific problem at all. They want something that actually is a slippery slope and can also be used for other purposes later on.

                Such a hash table would at least detect known illegal content. And it wouldn’t even trigger on legal content, for example if someone underage consensually sends nudes to their partner. Having it only flag on multiple matches also makes it less likely that someone can be attacked by being sent a single illegal picture (planted evidence) and instantly getting flagged and raided by the police. In all the proper cases I’ve read about, they always find hundreds of images on a criminal’s hard disk. And the police have already said they can’t handle loads of false positives: they’re understaffed and overworked, and a solution that generates many false positives would leave them with even less time to deal with the actual criminals.

                So this sounds like a solution Apple has put some thought into. It tackles a lot of the issues that were previously my arguments against CSAM filters.