Promising stuff from their repo, claiming “exceptional performance, achieving a [HumanEval] pass@1 score of 57.3, surpassing the open-source SOTA by approximately 20 points.”

https://github.com/nlpxucan/WizardLM

  • noneabove1182OPM · 1 year ago

    Oh wait does ooba support this? Nvm then I’m enjoying using that, I’m just a little lost sometimes haha

    • Kerfuffle · 1 year ago

      I don’t know whether it does or not; I was just saying those two projects seemed similar: both present a frontend for running inference on models, so the user doesn’t necessarily have to know or care which backend is used.

      • noneabove1182OPM · 1 year ago

        Gotcha. koboldcpp seems to be able to run it; all of this is only a tiny bit confusing :D