Let’s talk about our experiences working with different models, whether well-known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

  • @Kerfuffle

    guanaco-65B is my favorite. It’s pretty hard to go back to 33B models after you’ve tried a 65B.

    It’s slow and requires a lot of resources to run, though. Also, it’s not like there are a lot of 65B model choices.

    • @planish

      What do you even run a 65b model on?

      • @Kerfuffle

        With a quantized GGML version you can just run it on CPU if you have 64 GB of RAM. It is fairly slow though; I get about 800 ms/token on a 5900X. Basically you start it generating something and come back in 30 minutes or so. Can’t really carry on a conversation.
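
        If you’d rather script it than call the llama.cpp binary directly, a minimal sketch with the llama-cpp-python bindings looks something like this (the model file name, thread count, and prompt wording are placeholders, not exact values from my setup):

        ```python
        from llama_cpp import Llama

        # Load a quantized GGML model entirely on CPU; a 4-bit 65B quant needs
        # roughly 40+ GB of RAM, so 64 GB is a comfortable fit.
        llm = Llama(
            model_path="guanaco-65B.ggmlv3.q4_K_M.bin",  # placeholder file name
            n_ctx=2048,    # stock LLaMA context window
            n_threads=12,  # tune to your CPU core count
        )

        # Guanaco-style prompt format; adjust to whatever the model card suggests.
        prompt = "### Human: Write a short story about a lighthouse keeper.\n### Assistant:"

        out = llm(prompt, max_tokens=512)
        print(out["choices"][0]["text"])
        ```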

        • @planish

          Is it smart enough to pick up the thread of what you’re looking for without as much rerolling or handholding, so the output comes out better?

          • @Kerfuffle

            That’s the impression I got from playing with it. I don’t really use LLMs for anything practical, so I haven’t done anything too serious with it. Here are a couple of examples of having it write fiction: https://gist.github.com/KerfuffleV2/4ead8be7204c4b0911c3f3183e8a320c

            I also tried with plain old llama-65B: https://gist.github.com/KerfuffleV2/46689e097d8b8a6b3a5d6ffc39ce7acd

            You can see it makes some weird mistakes (although the writing style itself is quite good).

            If you want to give me a prompt, I can feed it to guanaco-65B and show you the result.

            • @planish

              These are, indeed, pretty good, and quite coherent.

              • @Kerfuffle

                I was pretty impressed by guanaco-65B, especially how it was able to remain coherent even way past the context limit (with llama.cpp’s context wrapping thing). You can see the second story is definitely longer than 2,048 tokens.