• chiisana
    23
    2 months ago

    What’s the resource requirements for the 405B model? I did some digging but couldn’t find any documentation during my cursory search.

    • @[email protected]
      40
      edit-2
      2 months ago

      Typically you need about 1 GB of graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B-parameter model. Ouch.

      Edit: you can try quantizing it. This reduces the amount of memory required per parameter to 4 bits, 2 bits, or even 1 bit. As you reduce the size, the performance of the model can suffer. So in the extreme case you might be able to run this in under 64 GB of graphics RAM.
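
      A quick back-of-the-envelope check (plain Python; weights only, ignoring KV cache and activation overhead):

      ```python
      # Rough graphics RAM needed for the weights alone: params * bits / 8 bytes.
      PARAMS = 405e9  # a 405B-parameter model

      for bits in (16, 8, 4, 2, 1):
          gb = PARAMS * bits / 8 / 1e9
          print(f"{bits:>2}-bit: ~{gb:.0f} GB")
      ```

      At 1 bit that works out to roughly 51 GB, which is where the under-64 GB extreme case comes from.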

      • @[email protected]
        20
        2 months ago

        Typically you need about 1 GB of graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B-parameter model.

          • bruhduh
            1
            2 months ago

            https://www.ebay.com/p/116332559 LGA2011 motherboards are quite cheap. Drop in two Xeon E5-2696 v4 CPUs (44 threads each, 88 threads total) and 8 sticks of 32 GB DDR4, and it comes out quite cheap actually. You can also install Nvidia P40s with 24 GB each. You can max out this build for AI for under $2000.

        • chiisana
          2
          2 months ago

          Finally! My dumb dumb 1 TB RAM server (4x E5-4640 + 32x 32 GB DDR3 ECC) can shine.

      • @[email protected]
        8
        edit-2
        2 months ago

        At work we have a small cluster totalling around 4 TB of RAM.

        It has 4 cooling units, about a cubic metre of PSUs, and it must take up something like 30 m² of floor space.

      • TipRing
        4
        2 months ago

        When the 8-bit quants hit, you could probably lease a 128 GB system on RunPod.

      • @[email protected]
        3
        2 months ago

        Can you run this in a distributed manner, like with Kubernetes and lots of smaller machines?
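
        In principle yes: pipeline parallelism gives each machine a contiguous slice of the layers and streams activations between them; real stacks such as vLLM or llama.cpp’s RPC mode handle the networking. A toy single-process sketch of the idea, with hypothetical `Node` objects standing in for actual machines:

        ```python
        import numpy as np

        class Node:
            """Stands in for one machine holding a slice of the model's layers."""
            def __init__(self, layers):
                self.layers = layers  # this node stores only its share of the weights

            def forward(self, x):
                for w in self.layers:
                    x = np.maximum(w @ x, 0.0)  # toy layer: matmul + ReLU
                return x

        rng = np.random.default_rng(0)
        dim, n_layers, n_nodes = 64, 8, 4
        layers = [rng.normal(scale=0.1, size=(dim, dim)) for _ in range(n_layers)]

        # Shard the layers evenly; each node needs only a fraction of the memory.
        per_node = n_layers // n_nodes
        nodes = [Node(layers[i * per_node:(i + 1) * per_node]) for i in range(n_nodes)]

        x = rng.normal(size=dim)
        for node in nodes:  # on a real cluster, each hop is a network call
            x = node.forward(x)
        print(x[:4])
        ```

        The catch: for a single request only one node is busy at a time, and every hop adds network latency, so it trades speed for being able to fit the model at all.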

      • @[email protected]
        22 months ago

        According to Hugging Face, you can run a 34B model using at most 22.4 GB of RAM. That’s an RTX 3090 Ti.
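
        The weights-only arithmetic lines up with a 4-bit quant (assumed sizes; the 22.4 GB figure presumably includes runtime overhead on top of the weights):

        ```python
        PARAMS = 34e9  # a 34B-parameter model
        CARD_GB = 24   # RTX 3090 Ti VRAM

        for bits in (16, 8, 4):
            gb = PARAMS * bits / 8 / 1e9
            verdict = "fits" if gb <= CARD_GB else "does not fit"
            print(f"{bits:>2}-bit: ~{gb:.0f} GB -> {verdict} in {CARD_GB} GB")
        ```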

      • arefx
        1
        2 months ago

        You mean my 4090 isn’t good enough 🤣😂

      • @[email protected]
        1
        edit-2
        2 months ago

        Hmm, I probably have that much distributed across my network… maybe I should look into some way of distributing it across multiple GPUs.

        Frak, just counted and I only have 270 GB installed. Approx 40 GB more if I install some of the deprecated cards in any spare PCIe slots I can find.

    • Blaster M
      6
      2 months ago

      As a general rule of thumb, you need about 1 GB per 1B parameters, so you’re looking at about 405 GB for the full size of the model.

      Quantization can compress it down to 1/2 or 1/4 that, but “makes it stupider” as a result.
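
      A toy illustration of the tradeoff: plain round-to-nearest quantization (not any particular production scheme) loses more precision as the bit width drops, which is the “stupider” part:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      w = rng.normal(scale=0.02, size=4096).astype(np.float32)  # stand-in weights

      def fake_quantize(x, bits):
          """Symmetric round-to-nearest quantization to `bits` bits, then dequantize."""
          levels = 2 ** (bits - 1) - 1
          scale = np.abs(x).max() / levels
          return np.round(x / scale) * scale

      for bits in (8, 4, 2):
          err = np.abs(w - fake_quantize(w, bits)).mean()
          print(f"{bits}-bit: mean abs weight error {err:.6f}")
      ```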