• finley@lemm.ee · +8/−17 · edited · 7 days ago

    With your first sentence, I can say you’re wrong. My 1997-era DX4 at 75 MHz ran Red Hat wonderfully. And SUSE, and Gentoo.

    As for the rest? You don’t know what an AI/LLM would have looked like on a processor from that era. No one had even thought of it then. That doesn’t mean it can’t run one. It just means you can’t imagine it.

    Fortunately, I do not lack imagination for what could be possible.

    • ᗪᗩᗰᑎ · +2 · 6 days ago

      With your first sentence, I can say you’re wrong.

      Except I’m not wrong. The model they ran is four orders of magnitude smaller than even the smallest “mini” models that are generally available; compare TinyLlama 1.1B [1] or Phi-3 3.8B Mini [2]. Most “mini” models range from 1 to about 10 billion parameters, which makes them incredibly inefficient to run on older devices.
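A back-of-the-envelope sketch of why parameter count dominates here: weight memory scales linearly with parameters, so a four-orders-of-magnitude difference is the gap between gigabytes and kilobytes of RAM. The bytes-per-parameter figures below are typical for common precisions, and the ~110K "toy" model size is illustrative (simply 1.1B divided by 10^4), not a figure from the thread.

```python
# Rough weight-memory estimate from parameter count and precision.
# Bytes-per-parameter values are typical figures, assumed for illustration.
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def weight_memory_mib(params: float, precision: str) -> float:
    """Approximate memory (MiB) needed just to hold the model weights."""
    return params * BYTES_PER_PARAM[precision] / (1024 ** 2)

# TinyLlama 1.1B vs. a hypothetical toy model four orders of magnitude smaller.
for name, n_params in [("TinyLlama 1.1B", 1.1e9), ("toy ~110K", 1.1e5)]:
    for precision in ("fp16", "q4"):
        mib = weight_memory_mib(n_params, precision)
        print(f"{name:>15} @ {precision}: {mib:10.2f} MiB")
```

Even 4-bit quantized, the 1.1B model needs hundreds of MiB for weights alone, while the toy-scale model fits in well under a megabyte, which is the regime where 1990s hardware becomes plausible.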

      That doesn’t mean it can’t run it. It just means you can’t imagine that.

      But I can imagine it. In fact, I could have told you it would need a significantly smaller model to run at an adequate pace on older hardware. It’s not a mystery at all; it’s a known factor. I think it’s absolutely cool that they did it, but let’s not pretend it’s more than what it is: a modern version of running Doom on non-standard hardware.

      [1] https://huggingface.co/TinyLlama/TinyLlama-1.1B-step-50K-105b

      [2] https://ollama.com/library/phi3:3.8b-mini-128k-instruct-q5_0

      [3] https://www.thirtythreeforty.net/posts/2019/12/my-business-card-runs-linux/