Promising stuff from their repo, claiming “exceptional performance, achieving a [HumanEval] pass@1 score of 57.3, surpassing the open-source SOTA by approximately 20 points.”

https://github.com/nlpxucan/WizardLM

  • noneabove1182OPM · 1 year ago

    Looks like gpt4all supports it; I thought it was based on llama for some reason. Going to have to give it a try.
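
    If trying it through gpt4all, the Python bindings look like one way in. A rough sketch, assuming a downloaded quantized build (the filename below is just a placeholder, not a specific release):

    ```python
    # Rough sketch: loading a WizardLM build through gpt4all's Python bindings.
    # The filename is a placeholder for whatever quantized file you actually download.
    from gpt4all import GPT4All

    model = GPT4All("wizardlm-13b-v1.1.q4_0.bin")  # hypothetical filename
    reply = model.generate("Write a haiku about local LLMs.", max_tokens=100)
    print(reply)
    ```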

    • Kerfuffle · 1 year ago

      It looks like a frontend that just bundles a bunch of stuff together. Oobabooga’s webui is similar: you can run stuff with llama.cpp, GPTQ, etc. Which models and features are supported depends on how the frontend manages those backends. There are also forks of llama.cpp like koboldcpp which may support different models/features/formats (I know koboldcpp supports some older GGML file formats that llama.cpp broke compatibility with).
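
      For a concrete picture of what those frontends are wrapping, here’s a minimal sketch of calling the llama.cpp backend directly through the llama-cpp-python bindings; the model path and prompt format are placeholders, not anything specific to WizardLM.

      ```python
      # Minimal sketch: the kind of backend call a frontend like ooba or gpt4all wraps.
      # Uses llama-cpp-python; the model path is a placeholder for a local GGML file.
      from llama_cpp import Llama

      llm = Llama(model_path="./models/wizardlm-13b.ggmlv3.q4_0.bin", n_ctx=2048)

      out = llm(
          "### Instruction: Explain what a GGML file is.\n### Response:",
          max_tokens=128,
          temperature=0.7,
      )
      print(out["choices"][0]["text"])
      ```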

      • noneabove1182OPM · 1 year ago

        Oh wait, does ooba support this? Nvm then, I’m enjoying using that; I’m just a little lost sometimes haha

        • Kerfuffle · 1 year ago

          I don’t know if it does or doesn’t; I was just saying those two projects seemed similar: presenting a frontend for running inference on models so the user doesn’t necessarily have to know or care which backend is used.

          • noneabove1182OPM · 1 year ago

            Gotcha, koboldcpp seems to be able to run it. All of this is only a tiny bit confusing :D