• ThorrJo@lemmy.sdf.org (OP) · 6 days ago

    But what makes this AI model unique is that it’s lightweight enough to work efficiently on a CPU, with TechCrunch saying an Apple M2 chip can run it.

    An Apple M2 can run bigger, higher-precision models than this, FWIW. The more interesting question is whether older CPUs can run it with acceptable performance.

    AI models are often criticized for taking too much energy to train and operate. But lightweight LLMs, such as BitNet b1.58 2B4T, could help us run AI models locally on less powerful hardware. This could reduce our dependence on massive data centers and even let people without access to the latest processors with built-in NPUs and the most powerful GPUs use artificial intelligence.

    This is definitely relevant to my interests, especially with NPU support for such models coming. Dirt-cheap ARM-based PCs built around e.g. the RK3588 are already shipping with small NPUs.
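
    Rough numbers to put that in perspective (weights only; activations, KV cache, and any layers kept at higher precision are ignored):

        # Back-of-the-envelope weight-memory footprint for a 2B-parameter model.
        params = 2_000_000_000

        fp16_gb   = params * 2 / 1e9          # 2 bytes per weight
        ideal_gb  = params * 1.58 / 8 / 1e9   # ~1.58 bits per weight, ideal packing
        packed_gb = params / 5 / 1e9          # 5 ternary weights per byte (3^5 = 243 <= 256)

        print(f"fp16:            {fp16_gb:.2f} GB")   # ~4.00 GB
        print(f"ideal 1.58-bit:  {ideal_gb:.2f} GB")  # ~0.40 GB
        print(f"5-per-byte pack: {packed_gb:.2f} GB") # ~0.40 GB

    Well under half a gigabyte of weights is why the “runs on an M2” framing undersells it; even fairly old machines have that much RAM to spare. Whether the compute side keeps up on old CPUs is the open question.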

  • mindbleach · 5 days ago

    It’s ternary, and I understand why they say “1-bit” instead, but it still bugs me that they call it that.

    I’d love to see how low they can push this and still get spooky results. Something with ten million parameters could fit on a Macintosh Classic II - and if it ran at any speed worth calling interactive, it’d undercut a lot of loud complaints about energy use. Training takes a zillion watts. Using the model is like running a video game.
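
    For anyone wondering where “1.58-bit” comes from: it’s log2(3), the information content of a single ternary weight. A quick sanity check of the Classic II idea (weights only, ideal packing assumed):

        import math

        bits_per_weight = math.log2(3)   # a ternary weight {-1, 0, +1} carries ~1.585 bits
        params = 10_000_000              # the hypothetical ten-million-parameter model

        megabytes = params * bits_per_weight / 8 / 1e6
        print(f"{bits_per_weight:.3f} bits/weight -> {megabytes:.2f} MB")
        # ~1.98 MB of weights, which would squeeze into a Classic II's 2-10 MB of RAM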

    • milicent_bystandr@lemm.ee · 3 days ago

      Can someone tell me what’s meant by,

      The repository describes bitnet.cpp as offering “a suite of optimized kernels that support fast and lossless inference of 1.58-bit models on CPU”.

      Does it mean you need to run your OS with a specific kernel from bitnet.cpp? Or is it a different kind of ‘kernel’?

      • mindbleach · 2 days ago (edited)

        I think they mean whatever’s handling the model: a program into which you feed this inherently restricted format, and which takes advantage of those limitations to run more efficiently.

        Like if every number’s magnitude is 1 or 0, you don’t need to do floating-point multiplication.
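
        A toy sketch of the idea (nothing like bitnet.cpp’s actual optimized kernels, just to show the multiply disappearing):

            # Toy ternary matrix-vector product: with weights in {-1, 0, +1},
            # each activation is added, subtracted, or skipped - no multiplies.
            def ternary_matvec(weights, x):
                out = []
                for row in weights:          # row: list of -1/0/+1 weights
                    acc = 0.0
                    for w, a in zip(row, x):
                        if w == 1:
                            acc += a
                        elif w == -1:
                            acc -= a
                        # w == 0 contributes nothing
                    out.append(acc)
                return out

            print(ternary_matvec([[1, 0, -1], [-1, 1, 1]], [0.5, 2.0, -1.0]))  # [1.5, 0.5]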

  • BetaDoggo_@lemmy.world · 6 days ago

    Rather than CPUs, I think these are a much bigger deal for GPUs, where memory is much more expensive. I can get 128 GB of RAM for 300 CAD; the same amount of VRAM would be several grand.

    • weker01 · 5 days ago

      For a second, I was wondering why you would need 300 instances of CAD software, and whether 128 GB isn’t a bit too small for that ludicrous amount of computer-aided design.