• exu@feditown.com
      1 year ago

      You’ll want to use a quantised model on your GPU. Alternatively, you can run on the CPU and offload some of the layers to the GPU with llama.cpp (an option in oobabooga). Llama.cpp models use the GGUF format.
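
      For example, running llama.cpp directly from the command line (a rough sketch; the model filename and layer count are placeholders you’d adjust for your own model and VRAM):

      ```shell
      # -ngl / --n-gpu-layers sets how many layers are offloaded to the GPU;
      # the remaining layers run on the CPU. Raise it until you run out of VRAM.
      ./llama-cli \
        -m ./models/example-7b-q4_k_m.gguf \
        -ngl 20 \
        -p "Hello"
      ```

      In oobabooga the same thing is the n-gpu-layers setting on the llama.cpp loader.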