A new Feature Drop brings new updates to make your hardware more helpful, productive and personalized. Plus, Gemini Nano now powers on-device generative AI features for Pixel 8 Pro.
How much of that is really built-in vs. offloaded to their cloud then cached locally (or just not usable offline, like Assistant)?
Why would that matter?
That’s the entire point. Running the LLM on device is what’s new here…
MLC LLM does the exact same thing. Lots of apps have low-quality LLMs embedded in chat apps. Low-res image generation apps using diffusion models similar to DALL·E Mini have been around for a while.
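For what it's worth, here's a minimal sketch of what "running an LLM on device" looks like with MLC LLM's Python package, assuming a recent mlc_llm build with its OpenAI-style MLCEngine API; the HF:// model id is just an example quantized artifact, and the MLCChat Android app runs the same compiled models fully on-device:

```python
# Minimal sketch: on-device LLM inference via MLC LLM's OpenAI-style API.
# Assumes the mlc_llm package is installed (pre-built wheels from mlc.ai)
# and that the example model id below resolves to a compiled 4-bit artifact.
from mlc_llm import MLCEngine

model = "HF://mlc-ai/Llama-3-8B-Instruct-q4f16_1-MLC"  # example quantized model
engine = MLCEngine(model)  # loads and runs the model on local hardware

# Stream a chat completion token by token, OpenAI-style.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "Say hi in five words."}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content or "", end="", flush=True)
print()

engine.terminate()  # release the resources held by the engine
```

Once the weights are on disk, everything happens locally; no server round-trip is involved.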
Also, Qualcomm used its AI stack to deploy Stable Diffusion to mobile back in February. And that's not the low-res one.
Think before you write.
I can't find a single production app that uses MLC LLM, for the reasons I listed earlier, like multi-GB models that aren't garbage (see the back-of-envelope below).
Qualcomm's announcement is a tech demo, and they promised to actually ship it next year…
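To put a rough number on "multi-GB": a quick back-of-envelope, assuming a 7B-parameter model (an illustrative size for a non-garbage open chat model, not a claim about any specific app) at 4-bit quantization:

```python
# Back-of-envelope: on-disk weight size of a 7B-parameter LLM at 4-bit quantization.
# The 7B figure is an illustrative assumption.
params = 7e9            # parameter count
bits_per_weight = 4     # common mobile-friendly quantization level
size_gib = params * bits_per_weight / 8 / 2**30
print(f"~{size_gib:.1f} GiB of weights")  # ~3.3 GiB, before KV cache and runtime overhead
```

Even aggressively quantized, the weights alone run to gigabytes, which is a real barrier to shipping in a consumer app.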
Who said anything about production and non-garbage? We're not talking about response quality or reach. You can use DistilRoBERTa for all I give a fuck. We're talking about whether they're first. They're not.
Are they the first to embed an LLM in an OS? Yes. A model with over X Bn params? Maybe, probably.
But they ARE NOT the first to deploy gen AI on mobile.
You're just moving the goalposts. I ran an LLM on-device in an Android app I built a month ago. Does that make me the first to do it? No. They're the first to bring it to production in an actual product.
Lol, projecting. You started mentioning production and LLMs out of the blue.
I hope you work for Google; you should be paid for this amount of ass-kissing.
Services running in GCP aren’t built into the phone, which is kinda the main point of the statement you took issue with.
What does that have to do with CACHING? That's client-server.
No clue what you’re talking about