actually-a-cat

  • 3 Posts
  • 13 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • Small update: take what I said about the breakage at 6000 tokens with a pinch of salt. Testing is complicated by something, somewhere, breaking in a way that persists across generations and even kobold.cpp restarts… It must be some CUDA driver issue, because it takes a PC reboot to resolve; after that, the exact same generation goes from gibberish to correct.

  • actually-a-cat to LocalLLaMA · Vicuna-33B-1-3-SuperHOT-8K-GPTQ · 1 year ago

    That’s what llama.cpp and kobold.cpp do: the KV cache is the last thing to get offloaded, so you can offload the weights and keep the cache in RAM. Neither supports SuperHOT right now, though.
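
    A minimal sketch of that split using the llama-cpp-python bindings (the model filename and layer count below are placeholder assumptions, not something from this thread):

      from llama_cpp import Llama

      # n_gpu_layers controls how many transformer layers' weights go to VRAM;
      # the KV cache is offloaded last, so a partial offload keeps it in RAM.
      llm = Llama(
          model_path="./wizard-vicuna-30b.ggmlv3.q4_0.bin",  # placeholder path
          n_ctx=2048,
          n_gpu_layers=40,  # tune down to leave room for the cache
      )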

    MQA models like Falcon-40B or MPT are going to be better for large context lengths: they have a tiny KV cache, so even blown up 16x it’s not a problem.
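
    Back-of-the-envelope numbers for the cache-size gap (the layer/head counts are illustrative assumptions, roughly 30B-class shapes, fp16 cache):

      def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
          # K and V each store seq_len * head_dim values per KV head per layer
          return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

      # MHA: every attention head has its own K/V (assume 60 layers, 52 heads, dim 128)
      print(kv_cache_bytes(60, 52, 128, 8192) / 2**30)  # ~12.2 GiB at 8k context

      # MQA: a single shared K/V head makes the same cache ~52x smaller
      print(kv_cache_bytes(60, 1, 128, 8192) / 2**30)   # ~0.23 GiB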

  • The wizard-vicuna family is my favorite; it successfully combines lucidity with creativity. Wizard-vicuna-30b is competitive with guanaco-65b in most cases while being subjectively more fun. I hope we get a 65b version, or a Falcon 40B one.

    I’ve been generally unimpressed with models advertised as good for storytelling or roleplay; they tend to be incoherent. It’s much easier to get wizard-vicuna to write fluent prose than it is to get one of those to stop mixing up characters or rules. I think there may be some sort of poison pill in the Pygmalion dataset; it’s the common factor in all the models that didn’t work well for me.