I’m learning a language, and I speak it in public with other people who do. I don’t research the language online, because I have some old textbooks on it. My partner doesn’t speak it and doesn’t research it on their devices. I don’t normally have my phone on me in public, but my partner does. It took about four months of speaking the language in public before they started getting ads for it.

What do you think this means?

::edit::

It was a Reddit ad, and my city has embraced those AI smart cameras, so I assume some of them are Google-owned, which would make sense given Reddit and Google’s recent alliance. This assumes our devices aren’t listening to us without our permission and that the AI cameras are mining data on passersby.

Other theories are that, since cellphones are involved, it doesn’t matter whether my partner or I ever searched for the language: at some point my phone or my partner’s phone was near someone who spoke it, and the data brokers/ad sellers inferred the interest from there.

Seems like the consensus is that I must have posted in the language on some social media, used Google to research it, or made some new friends who speak the language, and that’s why the ads appeared.

  • EngineerGaming@feddit.nl
    link
    fedilink
    arrow-up
    20
    ·
    5 months ago

Yea, that is my problem with the “always listening” theory. I’m sure they’re capable of it, but I don’t think they’re doing it, simply because they can get more data at a fraction of the cost through more “traditional” tracking.

    In a way, it is scarier than listening - because listening is far easier to understand than the multitude of ways the data is collected and combined.

    • otp
      link
      fedilink
      arrow-up
      4
      ·
      5 months ago

      Exactly. They definitely could, but there’d also be potential legal issues, and it’d just be much more expensive to analyze sound data.

If it’s done on each device, then battery life would suck and performance would decline. Sure, they could do that, but I imagine most phone manufacturers would rather sell more phones and make money from the app companies (Meta, Google) that pay to have their apps pre-installed on the phone. Samsung and Apple, meanwhile, have their own ecosystems for mining data the way Google does.

If they were instead just uploading audio to central servers (which could mitigate the legal issues by “anonymizing” the data), then they’d be paying for the computational power to analyze all of it.

      Again, completely possible, and likely in use with things like Alexa and Google Home. But on our phones (and laptops for that matter), they have so many other cheaper ways to get probably the same quality of information.
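To put rough numbers on that cost, here’s a back-of-envelope sketch. All figures are assumptions for illustration (speech compressed at roughly Opus-like 16 kbps, a hypothetical fleet of one billion always-on phones), not measurements from any vendor:

```python
# Back-of-envelope: how much audio would "upload everything" generate?
# Assumed figures, for illustration only.
BITRATE_BPS = 16_000           # ~16 kbps, plausible for compressed speech
SECONDS_PER_DAY = 86_400
DEVICES = 1_000_000_000        # hypothetical fleet of always-on phones

per_device_mb = BITRATE_BPS / 8 * SECONDS_PER_DAY / 1e6   # bytes -> MB
fleet_pb = per_device_mb * DEVICES / 1e9                  # MB -> PB

print(f"~{per_device_mb:.0f} MB per device per day")   # ~173 MB
print(f"~{fleet_pb:.0f} PB per day across the fleet")  # ~173 PB
```

Even before any speech-to-text compute, that’s on the order of hundreds of petabytes a day just to move the audio, which is the comparison being made against cheap metadata-based tracking.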

      • EngineerGaming@feddit.nl
        link
        fedilink
        arrow-up
        4
        ·
        5 months ago

        It is not about “too risky”. It is about “costs much more in processing power while providing a fraction of the info”.

        • ArcaneSlime@lemmy.dbzer0.com
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          5 months ago

Tbh, my theory wouldn’t cost too much more.

My theory is not that they’re always listening and sending a live audio stream back to some dingus in headphones. It has to be always listening for the trigger words, right? That’s how it works: you say “yo siri” and she hears you, so it must be listening at least for the trigger word.

          With that in mind, what’s to stop them from using other, secret trigger words, which might even behave differently than the advertised ones? Say I’m Joe Bridgestone, and I pay Google to add “tires” or “new tires” as silent trigger words that, instead of activating the assistant, send ads for Bridgestone Tires. Why wouldn’t that be possible? It’d also be much harder to catch than “literally a 24/7 hot mic.” Tbh I find it hard to believe that the NSA hasn’t at least tried to set this up for words like “bomb” or whatever, too.