• dragonfornicator@partizle.com

      It’s something that’s not talked about, which, given our data-obsessed world, I interpret as “we just do it by default (because nobody will complain, it’s normal, yada yada)”.

      Besides, it’s stated that the scanning itself only happens on your device. But if you scan locally for illegal material, it’s not really far-fetched that someone gets informed about someone having, for example, CSAM on their device. Why else would you scan for it? So at the very least, that information is collected somewhere.

      • bouncing@partizle.comM

        I think your threat model for this is wrong.

        First of all, understand how it works: it’s a local feature that uses image recognition to identify nudity. The idea is, if someone sends you a dick pic (or worse, CSAM), you don’t have to view it to know what it is. That’s been an option on the accounts of minors for some time now, and it is legitimately a useful feature.

        Now they’re adding it as an option for adult accounts and letting third-party developers add it to their apps.
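
        To be concrete about what third-party developers actually get: the on-device analyzer hands the app a yes/no verdict, and the app decides whether to blur. Here’s a minimal sketch in Swift of how a messaging app might use it; I’m going from memory of the SensitiveContentAnalysis framework (iOS 17+), so treat the exact names and signatures as assumptions:

        ```swift
        import SensitiveContentAnalysis

        // Ask the on-device analyzer whether an image is sensitive.
        // No network call is involved; the verdict stays local and the
        // app alone decides what to do with it.
        func shouldBlur(imageAt url: URL) async -> Bool {
            let analyzer = SCSensitivityAnalyzer()

            // The analyzer only runs if the user has opted in
            // (Settings > Privacy & Security > Sensitive Content Warning).
            guard analyzer.analysisPolicy != .disabled else { return false }

            do {
                let analysis = try await analyzer.analyzeImage(at: url)
                return analysis.isSensitive // true -> overlay a blur, let the user opt in to viewing
            } catch {
                return false // analysis failed; show the image as-is
            }
        }
        ```

        Note that nothing in this flow reports anywhere: the framework returns a boolean to the app, so any “phoning home” would be something an app adds on top, not something the scanning itself does.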

        The threat that they’ll suddenly send the scanning results to corporate without telling anyone seems unlikely. Doing so would be a huge liability with no real benefit for them.

        But the threat is this: with this technology available, there will be pressure to make it not optional (“Why does Apple let you disable the child porn filter — wtf?”). If they bend to that pressure, then why not introduce filters for other illegal content? Why not filter comments criticizing the CCP in China, or content that infringes on copyright?

        Having a “dick pic filter” is a useful technology, and I know some people who would love to have it. That doesn’t mean the technology couldn’t be misused for nefarious purposes.

        • dragonfornicator@partizle.com

          I am aware that it’s local; I just assumed it would also call home.

          My threat model here is based on cases like this: https://www.theverge.com/2022/8/21/23315513/google-photos-csam-scanning-account-deletion-investigation

          And yes, I did see it as a privacy issue, not a censorship one. Inevitably, if the pressure to expand it towards other content materializes, it could become a problem comparable to the “Article 13” debate Europe was, or is, facing.

          Generally, blocking specific types of content is a valid option to have, as long as it is an option and the user knows it is one. I just distrust it coming from the likes of Google or Apple.

            • dragonfornicator@partizle.com

              Well, thank you for clarifying. I was not aware of what exactly Apple or Google were communicating regarding their platforms.

          • theonlykl@partizle.com

            I would honestly find it very difficult to believe that there wasn’t going to be some telemetry, data, etc. sent back to the mothership. I know Apple’s marketing caters towards “privacy”, but who’s really validating those claims?

            Granted… I’m also very tin-foil-hatty about my data and retain it all locally with offsite backups. I tore down my Google Drive / cloud data about two years ago.

      • renohren@partizle.com

        I find it very plausible that the scanning happens purely on device with the current Bionic chips, and that Apple is against any server-side data collection: if one person gets flagged because of a false positive, Apple’s privacy reputation with the general public goes down the drain; there would be too much damage to the brand. The ability to deny knowledge of anything to do with their customers’ activities is also the biggest reason for their push to E2E encryption. Server-side data collection for CSAM would destroy that plausible-deniability argument in all matters.

        • Foxygen@partizle.com

          Not just damage to the brand, but also a lawsuit. They flatly say they aren’t phoning home with detection results. If they are, that opens them up to legal remedies from people who were lied to.

          Maybe Anker gets away with flat-out lying (about E2E encryption, for example), but a huge publicly traded company on this side of the Pacific is another matter.

        • dragonfornicator@partizle.com

          If you think about it, Apple’s privacy reputation doesn’t matter to begin with. First, it’s just a reputation: it’s what they claim, not necessarily what they do. They are a multi-trillion-dollar tech giant; you don’t get there with honesty. But regardless, imagine their reputation goes down the drain. The consensus of “if you have nothing to hide, you have nothing to fear” and the ability to claim everything is done to “protect the children” (or, relatedly, to “protect the country from terrorism”) all negate the necessity for that reputation. As sad as it is, most people don’t think much about privacy, at least not in regard to modern technology. People will still buy their products and be part of the ecosystem. Apple is a luxury brand, their products are used as status symbols, their most loyal customers are essentially a cult, and for many it’s all they know. That is, if such a case even gets big enough to “go viral”.

          • bouncing@partizle.comM

            I think you’re underestimating it. They just introduced E2E encryption for almost all iCloud content. That’s not something there was much market pressure to implement.

              • bouncing@partizle.comM

                I don’t know that likes apply at all.

                To my understanding, it’s all the metadata, though. What’s not included are contacts, calendar, and email, because there’s no way to implement it with CardDAV, IMAP, etc.

          • mikeymike@partizle.comM

            It’s also what they do.

            • Private relay
            • Tracking protection
            • Full e2e encryption where feasible
            • Device encryption
            • App tracking protection

            It’s a brand, yes, but it is absolutely reasonable to ask why they would want to flush that reputation away.

              • mikeymike@partizle.comM

                Source code to confirm it would be nice, but security researchers crawl all over this stuff.

                Also, they have no real incentive to do otherwise. As product features, these don’t just sell products; they actually reduce the administrative load on Apple, because Apple then doesn’t have to deal with as many data requests.