These are the full weights; quants are already incoming from TheBloke, and I'll update this post when they're fully uploaded.

From the author(s):

WizardLM-70B V1.0 achieves a substantial and comprehensive improvement in coding, mathematical reasoning, and open-domain conversation capabilities.

This model is license-friendly and follows the same license as Meta Llama-2.

The next version is in training and will be released together with our new paper soon.

For more details, please refer to:

Model weight: https://huggingface.co/WizardLM/WizardLM-70B-V1.0

Demo and Github: https://github.com/nlpxucan/WizardLM

Twitter: https://twitter.com/WizardLM_AI

GGML quant posted: https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GGML

GPTQ quant repo posted, but still empty (GPTQ quants take a lot longer to produce): https://huggingface.co/TheBloke/WizardLM-70B-V1.0-GPTQ
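For anyone who wants to try the GGML quants locally once they're downloaded, here's a minimal sketch using llama-cpp-python. Nothing in it comes from the post itself: the filename, the layer-offload count, and the prompt template are my assumptions, so check TheBloke's model card for the exact file names and recommended prompt format.

```python
# Minimal sketch (assumptions, not from the post): running one of TheBloke's
# GGML quants with llama-cpp-python. The filename and settings below are
# illustrative -- check the GGML repo's file list for the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="wizardlm-70b-v1.0.ggmlv3.q4_K_M.bin",  # hypothetical filename
    n_ctx=2048,       # Llama-2 context length
    n_gpu_layers=40,  # offload layers to GPU if built with cuBLAS; 0 = CPU only
)

# WizardLM V1.0 is generally prompted Vicuna-style; confirm against the model card.
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Explain what a q2 quant trades away. ASSISTANT:"
)

out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```

The q2/q4/q5/q8 suffixes are just different size-vs-quality trade-offs: the lower the number, the less RAM the model needs and the more output quality you give up.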

  • ffhein, 10 months ago

    Me a few months ago when upgrading my computer: pff, who needs 64GB of RAM? Seems like a total waste

    Me after realising you can run LLMs at home: cries

  • @AsAnAILanguageModel, 10 months ago
    Tried the q2 ggml and it seems to be very good! First tests make it seem as good as airoboros, which is my current favorite.

    • @noneabove1182 (OP), 10 months ago

      Agreed, it seems quite capable. I haven't tested all the way down to q2 to verify, but I'm not surprised.