• evo · 1 year ago

    I can’t find a single production app that uses MLC LLM, because of the reasons I listed earlier (like any model that isn’t garbage being multiple GB in size).

    Qualcomm’s announcement is a tech demo, and they promised to actually ship it next year…

    • sciencesebi@feddit.ro · 1 year ago

      Who said anything about production and non-garbage? We’re not talking about quality of responses or adoption. You can use a distilled RoBERTa for all I give a fuck. We’re talking about whether they’re the first. They’re not.

      Are they the first to embed an LLM in an OS? Yes. A model with over x Bn params? Maybe, probably.

      But they ARE NOT the first to deploy gen AI on mobile.

      • evo · 1 year ago

        You’re just moving the goalposts. I ran an LLM on-device in an Android app I built a month ago (a rough sketch of that kind of setup is below). Does that make me the first to do it? No. They are the first to reach production with an actual product.
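
        For context, on-device LLM inference in an Android app can look roughly like this. The sketch assumes Google’s MediaPipe LLM Inference task (the com.google.mediapipe:tasks-genai artifact) and a quantized model file already pushed to the device; the class and option names here are from memory and should be treated as assumptions, not a definitive implementation.

            // Minimal sketch of single-shot, on-device LLM inference (Kotlin).
            // Assumes MediaPipe's LLM Inference task; API names may differ slightly.
            import android.content.Context
            import com.google.mediapipe.tasks.genai.llminference.LlmInference

            fun runLocalLlm(context: Context, prompt: String): String {
                // The quantized model weights (still often multiple GB, per the
                // point upthread) must already exist on the device at this path.
                val options = LlmInference.LlmInferenceOptions.builder()
                    .setModelPath("/data/local/tmp/llm/model.bin")
                    .setMaxTokens(256)
                    .build()

                val llm = LlmInference.createFromOptions(context, options)
                return llm.generateResponse(prompt) // blocking, single response
            }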

        • sciencesebi@feddit.ro · 1 year ago

          Lol, projecting. You started talking about production and LLMs out of the blue.

          I hope you work for Google; you should be paid for this amount of ass-kissing.