Let’s talk about our experiences working with different models, whether well-known or lesser-known.

Which locally run language models have you tried? Share your insights, challenges, or anything interesting you found while using them.

  • actually-a-cat · 1 year ago

    The wizard-vicuna family is my favorite; it successfully combines lucidity with creativity. Wizard-vicuna-30b is competitive with guanaco-65b in most cases while being subjectively more fun. I hope we get a 65b version, or a Falcon 40B one.

    I’ve been generally unimpressed with models advertised as good for storytelling or roleplay; they tend to be incoherent. It’s much easier to get wizard-vicuna to write fluent prose than it is to get one of those to stop mixing up characters or rules. I suspect there’s some sort of poison pill in the Pygmalion dataset, since it’s the common factor in all the models that didn’t work well for me.

      • actually-a-cat · 1 year ago

        W-V is supposedly trained for “USER:/ASSISTANT:”, but I’ve found it flexible and able to work with anything that’s consistent. For creative writing I’ll often use “USER:/STORY:”. More than two such tags also work; e.g., I did an RPG-style thing with three characters plus an omniscient narrator, just by describing each of them with their tag in the prompt, and it worked nearly flawlessly. Very impressive, actually.
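
        For anyone who wants to try the same trick, here’s a minimal sketch using llama-cpp-python; the model filename, character names, and tags are my own placeholders, not details from the comment above.

        ```python
        # Minimal sketch of the multi-tag prompting described above, using
        # llama-cpp-python. Model path, characters, and tags are hypothetical;
        # any consistent set of tags should work the same way.
        from llama_cpp import Llama

        llm = Llama(model_path="./wizard-vicuna-30b.ggmlv3.q5_K_S.bin", n_ctx=2048)

        # Describe each participant and its tag once up front, then alternate tags.
        prompt = (
            "A fantasy RPG transcript with three characters and a narrator.\n"
            "NARRATOR: describes scenes omnisciently.\n"
            "ARDEN: a cautious ranger.\n"
            "MIRA: an impulsive sorceress.\n"
            "TOV: a talkative merchant.\n\n"
            "NARRATOR: The party reaches the gates of the ruined keep.\n"
            "ARDEN:"
        )

        # Stop at the next tag so each call produces exactly one turn.
        out = llm(prompt, max_tokens=200, stop=["NARRATOR:", "ARDEN:", "MIRA:", "TOV:"])
        print(out["choices"][0]["text"])
        ```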

  • Equality_for_apples · 1 year ago

    I’ve been doing RP with Wizard-Vicuna 13B Uncensored. It’s good and very fast (the GGML v3 q5_K_S variant), but it sometimes forgets it’s roleplaying and spits out a story instead.

  • Kerfuffle · 1 year ago

    guanaco-65B is my favorite. It’s pretty hard to go back to 33B models after you’ve tried a 65B.

    It’s slow and requires a lot of resources to run, though. Also, it’s not like there are a lot of 65B models to choose from.

    • planish · 1 year ago

      What do you even run a 65b model on?

      • Kerfuffle · 1 year ago

        With a quantized GGML version you can just run it on CPU if you have 64GB of RAM. It’s fairly slow, though: I get about 800ms/token on a 5900X. Basically you start it generating something and come back in 30 minutes or so (at that rate, half an hour is only about 2,250 tokens). You can’t really carry on a conversation.
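
        For reference, here’s a rough sketch of that kind of CPU-only setup with llama-cpp-python; the file name, quantization, and thread count are illustrative guesses, not Kerfuffle’s actual configuration.

        ```python
        # Rough sketch of CPU-only inference with a quantized 65B GGML model.
        # A 4-bit 65B quant is roughly 40 GB on disk, which is why 64 GB of
        # system RAM is about the practical floor for running it on CPU.
        import time
        from llama_cpp import Llama

        llm = Llama(
            model_path="./guanaco-65B.ggmlv3.q4_0.bin",  # hypothetical filename
            n_ctx=2048,
            n_threads=12,  # physical core count of a Ryzen 5900X
        )

        start = time.time()
        out = llm("Write the opening scene of a mystery novel.", max_tokens=256)
        elapsed = time.time() - start

        n_tokens = out["usage"]["completion_tokens"]
        print(out["choices"][0]["text"])
        print(f"{elapsed / n_tokens * 1000:.0f} ms/token")  # ~800 on a 5900X, per above
        ```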

        • planish · 1 year ago

          Is it smart enough to pick up the thread of what you’re looking for without as much rerolling or handholding, so that the output comes out better?

          • Kerfuffle · 1 year ago

            That’s the impression I got from playing with it. I don’t really use LLMs for anything practical, so I haven’t done anything too serious with it. Here are a couple of examples of having it write fiction: https://gist.github.com/KerfuffleV2/4ead8be7204c4b0911c3f3183e8a320c

            I also tried with plain old llama-65B: https://gist.github.com/KerfuffleV2/46689e097d8b8a6b3a5d6ffc39ce7acd

            You can see it makes some weird mistakes (although the writing style itself is quite good).

            If you want to give me a prompt, I can feed it to guanaco-65B and show you the result.

            • planish · 1 year ago

              These are, indeed, pretty good, and quite coherent.

              • Kerfuffle · 1 year ago

                I was pretty impressed by guanaco-65B, especially by how it stayed coherent even well past the context limit (via llama.cpp’s context-swapping feature). You can see the second story is definitely longer than 2,048 tokens.
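
                My understanding of that feature (an assumption on my part, not something stated in this thread) is that when the window fills, llama.cpp keeps the first n_keep prompt tokens plus the most recent half of everything after them. A toy illustration:

                ```python
                # Toy illustration of llama.cpp-style context swapping (my
                # assumption of the mechanism, not its actual code): when the
                # window fills, keep the first n_keep tokens plus the most
                # recent half of the remainder, then keep generating.
                def swap_context(tokens: list[int], n_ctx: int, n_keep: int) -> list[int]:
                    if len(tokens) < n_ctx:
                        return tokens  # still room in the window
                    tail = tokens[n_keep:]
                    return tokens[:n_keep] + tail[len(tail) // 2:]

                # An 8-token window with a 2-token protected prompt:
                print(swap_context(list(range(8)), n_ctx=8, n_keep=2))
                # -> [0, 1, 5, 6, 7]  (prompt kept, oldest generations dropped)
                ```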

  • dtlnx@beehaw.org (OP) · 1 year ago

    I’d have to say I’m very impressed with WizardLM 30B (the newer one). I run it in GPT4All, and even though it’s slow, the results are quite impressive.

    Looking forward to Orca 13b if it ever releases!

    • micheal65536@lemmy.micheal65536.duckdns.org · 1 year ago

      Which one is the “newer” one? Looking at the quantised releases by TheBloke, I only see one version of 30B WizardLM (in multiple formats/quantisation sizes, plus the unofficial uncensored version).

  • Yahma@kbin.social · 1 year ago

    Guanaco, WizardLM (uncensored), and Camel-13b have been the best 13B+ models I’ve tried.

    Surprisingly, LaMini-LM (Flan 3B) and OpenLLaMA (3B) have performed very well for smaller models.

  • VraethrDalkr · 1 year ago

    WizardLM 30B v1.0: works great with long prompts, coherent and creative.

    Airoboros (13B and 33B): performs very well on my benchmarks, accurate but more predictable than WizardLM.

    Chronos-Hermes 13B: excellent at role-play with SillyTavern. Very creative and coherent for generating character dialogue; it never ceases to amaze me. Scores average on benchmarks.

    Edit: Sorry for spamming the same comment. It looks like my Voyager app glitched out.