• Even_Adder@lemmy.dbzer0.com · 6 months ago

      It’s local. You’re not sending data to their servers.

      We’re looking at how we can use local, on-device AI models – i.e., more private – to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities. The alt text is then processed on your device and saved locally instead of cloud services, ensuring that enhancements like these are done with your privacy in mind.

      At least use the whole quote.

      • festnt · 6 months ago

        yeah, of course it's gonna look like it's not local if you take out the part where it says it's local

    • GenderNeutralBro@lemmy.sdf.org · 6 months ago

      That’s somewhat awkward phrasing, but I think the visual processing will also be done on-device. There are a few small multimodal models out there. Mozilla’s llamafile project includes multimodal support, so you can query a language model about the contents of an image.
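
      Rough idea of what that looks like in practice (just a sketch, not tested; the model filename is hypothetical and the flags are from memory of llama.cpp's LLaVA options, so check the llamafile docs):

          # Generate alt text for an image with a local LLaVA-style llamafile.
          # Everything runs on-device; nothing is sent to a server.
          import subprocess

          MODEL = "./llava-v1.5-7b-q4.llamafile"  # hypothetical local model file

          def alt_text(image_path: str) -> str:
              prompt = ("### User: Describe this image in one short sentence "
                        "for use as alt text.\n### Assistant:")
              # --image / -p / --temp are the LLaVA-style options; verify against the docs
              result = subprocess.run(
                  [MODEL, "--temp", "0.2", "--image", image_path, "-p", prompt],
                  capture_output=True, text=True, check=True,
              )
              return result.stdout.strip()

          print(alt_text("figure1.png"))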

      Even just a few months ago I would have thought this was not viable, but the newer models are game-changingly good at very small sizes. Small enough to run on any decent laptop or even a phone.