• evo · 8 months ago

    This is a weird hit piece where the author has all the facts but seems to just not understand them…

    The TL;DR (which we knew months ago) is that the 8GB of RAM in the Pixel 8 isn't enough to run the Gemini Nano LLM on device; the 12GB of RAM in the 8 Pro is actually needed.

    Again, strange to call this a “fail” when Google is the first and only company to really do this and never claimed it was coming to the 8. This is the bleeding edge; of course it only runs on the highest-spec hardware.

    • redcalcium@lemmy.institute · 8 months ago

      Another article points out that the base Galaxy S24, which also has 8GB of RAM, is able to run Gemini Nano.

      That said, I will point out that the base Galaxy S24 model comes with 8GB of RAM. Only the Plus and Ultra feature 12GB of memory. Google didn’t exclude the regular Galaxy S24 model from the list of Gemini Nano devices. Then again, the Galaxy S24 series features a much better chipset than the Pixel 8 phones. Maybe that’s what allows a device with 8GB of RAM to run on-device AI via Gemini Nano.

    • limerod@reddthat.com (OP, mod) · 8 months ago

      This is a weird hit piece where the author has all the facts but seems to just not understand them…

      Actually he does.

      The TL;DR (which we knew months ago) is that the 8GB of RAM in the Pixel 8 isn't enough to run the Gemini Nano LLM on device; the 12GB of RAM in the 8 Pro is actually needed.

      Not quite. If you read further, he writes:

      The Pixel 8 is fitted with the same Tensor G3 chip as the Pixel 8 Pro, but where the latter has 12 GB of RAM, the Pixel 8 is stuck with just 8 GB, which appears to be the cause of the system bottleneck on this occasion. This is still somewhat surprising, as Gemini Nano comes in two model sizes: one running at just 1.8 billion parameters and the other at 3.6 billion parameters. It is unclear which of the two models is running on the Pixel 8 Pro, but at least it can support Google’s first on-device mobile LLM.

      As far as on-device AI models go, both are relatively modest, however. Qualcomm, which has just launched a new AI Hub with over 75 AI models compatible with its Snapdragon chips, has highlighted that its Snapdragon 8 Gen 3 can support AI models of up to 10 billion parameters. Even its Snapdragon 8 Gen 2 can support AI models running at up to 7 billion parameters.

      This is significant for two reasons. Firstly, the greater the parameter count, potentially the more sophisticated and accurate the model. Secondly, it highlights that it is not just system RAM where the Pixel 8 suffers: the Tensor G3 is, as we have previously examined, far from being the AI champ that Google’s marketing would lead us to believe. Google has pitched the Pixel series as being all about the AI and defended the Tensor for its lack of outright performance on the pretense that outright performance is less important than AI capability.

      The Tensor isn't strong enough in the first place compared to the Snapdragon 8 Gen 3/2.
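      For a rough sense of scale (my own back-of-envelope numbers, not the article's, and ignoring the OS, other apps, activations, and runtime overhead that actually eat most of the RAM), weight memory is roughly parameter count times bytes per parameter:

        # Back-of-envelope weight-memory estimate: params * bytes_per_param.
        # The quantization levels are assumptions; Google hasn't published Gemini Nano's format.

        def weight_memory_gb(params_billion: float, bits_per_param: int) -> float:
            """Approximate memory needed just to hold the weights, in GB."""
            return params_billion * 1e9 * bits_per_param / 8 / 1e9

        for params in (1.8, 3.6):      # the two Gemini Nano sizes named in the article
            for bits in (4, 8, 16):    # assumed quantization levels
                print(f"{params}B params @ {bits}-bit ~= {weight_memory_gb(params, bits):.1f} GB")

      Even at 16-bit, the 3.6B model's weights come to only about 7 GB, so the practical limit is presumably how much RAM is left over after Android and foreground apps rather than the raw weight size.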

      Again, strange to call this a “fail” when Google is the first and only company to really do this and never claimed it was coming to the 8.

      True, if you don't count the S24 series, whose base model also has only 8GB of RAM.

      The author's point in the article, if you read it, is that Tensor was made for Pixels because of AI demands, and its weak performance was supposed to be OK because it had AI chops. Instead, not only is the Pixel 8 missing out on some advanced AI-based features, it isn't even equal to the competition (the S24 series). If the Pixel 8, which launched in October, is missing out this early, what else will it miss over its 7 years of updates?

    • Deceptichum · 8 months ago

      So Gemini Nano needs 12GB but Gemma can run on 4GB? Why don't they just use their Gemma model on this lower-end device?
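      For scale, here's a hedged desktop-side sketch (it assumes the public google/gemma-2b checkpoint, 4-bit quantization via Hugging Face transformers and bitsandbytes, and a CUDA GPU; it is nothing Google actually ships on Pixels) suggesting the 2B Gemma's weights fit in roughly 1-2 GB:

        # Desktop illustration only: load the public 2B Gemma checkpoint in 4-bit
        # and report its weight footprint. Assumes pip install transformers accelerate bitsandbytes
        # plus access to the gated "google/gemma-2b" repo; Gemini Nano itself isn't downloadable.
        import torch
        from transformers import AutoModelForCausalLM, BitsAndBytesConfig

        quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
        model = AutoModelForCausalLM.from_pretrained(
            "google/gemma-2b",
            quantization_config=quant,
            device_map="auto",
        )
        print(f"Weight footprint: {model.get_memory_footprint() / 1e9:.1f} GB")

      That only speaks to raw weight size, though; it says nothing about which models Google's on-device runtime actually supports.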