Let’s talk about our experiences working with different models, whether well-known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

  • @planish
    1 year ago

    These are, indeed, pretty good, and quite coherent.

    • @Kerfuffle
      1 year ago

      I was pretty impressed by guanaco-65B, especially how it was able to remain coherent even way past the context limit (with llama.cpp’s context wrapping thing). You can see the second story is definitely longer than 2,048 tokens.
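
      A minimal sketch for checking a story's token count yourself, assuming the llama-cpp-python bindings; the model and text file names are placeholders:

      ```python
      # Rough sketch: count tokens against the 2,048-token LLaMA-1 context window.
      from llama_cpp import Llama

      # Any LLaMA-1 derivative (Guanaco included) uses the same tokenizer,
      # so a smaller model file should give the same counts as the 65B one.
      llm = Llama(model_path="models/guanaco-65B.Q4_K_M.gguf",  # placeholder path
                  n_ctx=2048, verbose=False)

      with open("second_story.txt", "rb") as f:  # placeholder file with the generated story
          story = f.read()

      tokens = llm.tokenize(story)  # returns a list of token ids
      print(f"{len(tokens)} tokens (context window is 2,048)")
      ```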