I’ve been using llama.cpp, gpt-llama, and chatbot-ui for a while now, and I’m very happy with the setup. However, I’m now looking into a more stable configuration that runs entirely on the GPU. Is llama.cpp still a good candidate for that?
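
A minimal sketch of the GPU-only setup I have in mind, using llama-cpp-python (this assumes a GPU-enabled build; the model path and quantization are placeholders):

```python
# Sketch: run llama.cpp fully on the GPU via llama-cpp-python.
# Assumes the package was built with GPU support, e.g.:
#   CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-13b.Q4_K_M.gguf",  # placeholder model file
    n_gpu_layers=-1,  # -1 = offload every layer to the GPU
    n_ctx=2048,
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```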

  • bia@lemmy.ml (OP) · 2 years ago

    I haven’t been able to test that out yet, but I saw the change. It’s particularly interesting for my use case.

    • gh0stcassette@lemmy.world · 2 years ago (edited)

      What use case would that be?

      I can get around 8 tokens/s running 13B models in Q3_K_L quantization on my laptop, about 2.2 tokens/s for 33B, and 1.5 tokens/s for 65B (I bought 64 GB of RAM just to be able to run the larger models lol). 7B was STUPID fast because the entire model fits inside my 8 GB GPU, but 7B models mostly suck (wizard-vicuna-uncensored is decent; every other one I’ve tried was not).
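
      For anyone curious, this is roughly how I split a model between GPU and CPU with llama-cpp-python; the file name and layer count are made up and need tuning against your VRAM (the more layers you offload, the closer you get to those 7B speeds):

      ```python
      # Sketch: partial GPU offload, keeping the remaining layers on CPU.
      # File name and n_gpu_layers are illustrative; raise the layer count
      # until you run out of VRAM (~8 GB in my case), then back off.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./models/llama-13b.Q3_K_L.gguf",  # hypothetical 13B file
          n_gpu_layers=20,  # guess for 8 GB VRAM; tune for your card
          n_ctx=2048,
      )

      # Stream tokens so you can eyeball the tokens/s yourself.
      for chunk in llm("Write a haiku about GPUs:", max_tokens=48, stream=True):
          print(chunk["choices"][0]["text"], end="", flush=True)
      ```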