• UnseriousAcademic@awful.systems
    4 months ago

    I don’t really understand how it’s possible to both not store data in plaintext and also be able to siphon off some of it in plaintext. Like is this technically possible in the way they suggest? We shoot off the plaintext before it gets to our storage servers?

    Like at some point that means the communication is not encrypted, right? But if you’re using HTTPS and all the usual good security standards, that should never be the case from the moment it departs your terminal?

    I have a small amount of knowledge about this but it’s the dangerously small type so any illumination would be appreciated.

    • David Gerard@awful.systemsOPM
      4 months ago

      Email is never stored unencrypted at rest on Proton’s servers. But AI prompts, which are likely your entire draft email, do exist unencrypted at rest on their servers. That’s what has the privacy nerds screaming.
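
      To make that concrete, here’s a rough sketch of the client-side difference, using made-up function and endpoint names rather than Proton’s actual code or API: the mail itself is encrypted on your device before anything is uploaded, but a cloud writing assistant has to be handed the draft as readable text to do anything with it.

      ```typescript
      // Rough sketch with made-up names; this is not Proton's actual client code or API.

      // Hypothetical stand-in for client-side, OpenPGP-style encryption.
      declare function encryptForRecipient(plaintext: string, key: CryptoKey): Promise<string>;

      // Storing or sending mail: the draft is encrypted on the device first,
      // so the server only ever receives and keeps ciphertext.
      async function saveMessage(draft: string, recipientKey: CryptoKey): Promise<void> {
        const ciphertext = await encryptForRecipient(draft, recipientKey);
        await fetch("https://mail.example/api/messages", {
          method: "POST",
          body: JSON.stringify({ ciphertext }), // no plaintext at rest server-side
        });
      }

      // Asking a cloud LLM assistant for help: the model has to read the draft to do
      // anything with it, so the prompt (likely the whole draft) is sent as readable
      // text. TLS protects it in transit, but it arrives at the provider as plaintext.
      async function getSuggestions(draft: string): Promise<unknown> {
        const res = await fetch("https://assistant.example/api/complete", {
          method: "POST",
          body: JSON.stringify({ prompt: draft }), // plaintext prompt on the provider's side
        });
        return res.json();
      }
      ```

      That second request is where the unencrypted-at-rest copy comes from; the message store itself never sees anything but ciphertext.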

      • self@awful.systems
        4 months ago

        yep! and the important thing to understand about proton is, the end-to-end encryption (where one end is the sender of a message and the other is the receiver; Proton never handles plaintext at all, beyond a tiny and clearly called out amount of metadata stored as plaintext on their servers for stuff like Calendars) is the whole point of the thing; there’s no reason to use Proton without it.

        with this LLM garbage, Proton’s threat model has shifted such that you can’t trust that the other end’s plaintext didn’t get transmitted to Proton’s servers (there’s no way for you, the receiver, to tell that the sender didn’t use the cloud LLM features; rough sketch of this below), which makes Proton a lot less useful for some of the most vulnerable people who use it, such as activists and journalists who might be under legal threat. this plaintext leak allows some of the messages you’ve received to be subpoenaed, and it’s very easy for that to be used in a criminal case against you.

        also, Proton’s published security model for their LLM feature (which is ultra-thin and resembles a PR puff piece more than any other model they published before this) states that their no-log policy is what makes the cloud version of the LLM secure, but their no-log policy has gigantic holes in it, and Proton’s response to these concerns is utterly unbefitting of a privacy/security software company
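
        a rough sketch of that “other end” problem, again with made-up names rather than anything Proton actually ships: whichever path the sender takes, the recipient receives the same opaque ciphertext, so nothing on the receiving side reveals that a plaintext copy may now exist on the provider’s servers.

        ```typescript
        // rough sketch, made-up names; not Proton's implementation.

        // hypothetical stand-ins for client-side encryption and the cloud assistant call
        declare function encryptForRecipient(plaintext: string, key: CryptoKey): Promise<string>;
        declare function cloudAssistantRewrite(draft: string): Promise<string>; // plaintext leaves the device here

        // path A: the plaintext never leaves the sender's device before encryption
        async function composeWithoutAssistant(draft: string, recipientKey: CryptoKey): Promise<string> {
          return encryptForRecipient(draft, recipientKey);
        }

        // path B: a plaintext copy of the draft goes to the provider's servers first
        async function composeWithAssistant(draft: string, recipientKey: CryptoKey): Promise<string> {
          const polished = await cloudAssistantRewrite(draft);
          return encryptForRecipient(polished, recipientKey);
        }

        // either way, the recipient just gets ciphertext; nothing in the delivered
        // message tells them which path the sender took.
        ```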

      • BlueMonday1984@awful.systems
        4 months ago

        I’d personally consider that sufficient grounds to accuse Proton of stealing its customers’ data.

        At the (minuscule) risk of sounding unnecessarily harsh on tech, any customer data that gets sent to company servers without the customer’s explicit, uncoerced permission should be considered stolen.

      • UnseriousAcademic@awful.systems
        4 months ago

        Ah OK, so it’s sending the email draft in progress, not sending off the content of incoming messages or your final sent messages. Now I understand. Also, that’s still bad…

    • conciselyverbose
      4 months ago

      The browser/app sees it (by definition; it has to). So it sends that plaintext back out to service those requests, if that’s what you’ve set up.
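
      A generic sketch of that point, with a hypothetical setting and endpoint rather than Proton’s actual options: the client always holds the plaintext, and configuration decides whether it stays local or gets transmitted.

      ```typescript
      // generic sketch; hypothetical option and endpoint, not Proton's actual settings or API.

      interface AssistantConfig {
        useCloudAssistant: boolean; // the setting that decides whether plaintext leaves the device
      }

      async function assist(draft: string, config: AssistantConfig): Promise<string> {
        if (!config.useCloudAssistant) {
          // local-only handling: the plaintext the client necessarily has stays in the client
          return draft.trim();
        }
        // cloud handling: the client sends that plaintext out so the service can act on it
        const res = await fetch("https://assistant.example/api/complete", {
          method: "POST",
          body: JSON.stringify({ prompt: draft }),
        });
        const { text } = (await res.json()) as { text: string };
        return text;
      }
      ```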