- cross-posted to:
- [email protected]
- [email protected]
cross-posted from: https://lemmy.intai.tech/post/40699
Credit:
@Yampeleg: the first model to beat 100% of ChatGPT-3.5, available on Hugging Face.
🔥 OpenChat_8192
🔥 105.7% of ChatGPT (Vicuna GPT-4 Benchmark)
Less than a month ago, the world watched ORCA [1] become the first model ever to outpace ChatGPT on Vicuna's benchmark.
Today, the race to replicate those results in open source comes to an end.
Minutes ago OpenChat scored 105.7% of ChatGPT.
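For context on what a figure like "105.7% of ChatGPT" means: in the Vicuna evaluation, GPT-4 acts as a judge and scores each model's answers, and the headline number is the candidate's total judge score as a percentage of ChatGPT's total. A minimal sketch of that ratio, with made-up toy scores (the judging prompts and real scores are not shown in this post):

```python
# Sketch of a Vicuna-style relative score: the candidate model's total
# judge score divided by the reference model's (ChatGPT's) total.
# The per-question scores below are invented purely for illustration.

def relative_score(candidate_scores, reference_scores):
    """Return the candidate's score as a percentage of the reference's."""
    return 100.0 * sum(candidate_scores) / sum(reference_scores)

# Toy example over three benchmark questions:
print(round(relative_score([9, 8.5, 9.5], [8.5, 8.5, 8.5]), 1))  # -> 105.9
```

A score above 100% therefore means the judge preferred the candidate's answers, in aggregate, over ChatGPT's.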
But wait! There is more!
Not only did OpenChat beat Vicuna's benchmark, it did so while pulling off a LIMA [2] move!
Training used only ~6K GPT-4 conversations out of the ~90K ShareGPT conversations.
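The LIMA-style curation described above amounts to keeping only the GPT-4 conversations from the larger ShareGPT dump. A hedged sketch of that filtering step, where the `"model"` field and record layout are assumptions for illustration (the real dataset schema may differ):

```python
# Keep only conversations tagged as coming from GPT-4, discarding the
# rest of the ShareGPT dump. Field names here are hypothetical.

def select_gpt4_conversations(conversations):
    """Return only the records whose 'model' field is 'GPT-4'."""
    return [c for c in conversations if c.get("model") == "GPT-4"]

# Toy stand-in for the ~90K-conversation ShareGPT dump:
sharegpt = [
    {"id": "a", "model": "GPT-4", "turns": ["hi", "hello!"]},
    {"id": "b", "model": "GPT-3.5", "turns": ["hi", "hey"]},
    {"id": "c", "model": "GPT-4", "turns": ["sum 2+2", "4"]},
]

subset = select_gpt4_conversations(sharegpt)
print([c["id"] for c in subset])  # -> ['a', 'c']
```

The surprising result, echoing LIMA, is that this small high-quality subset (~6K of ~90K) was enough to surpass ChatGPT on the benchmark.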
The model comes in three versions: the basic OpenChat model, OpenChat-8192, and OpenCoderPlus (code generation: 102.5% of ChatGPT).
This is a significant achievement considering that it’s the first (released) open-source model to surpass the Vicuna benchmark. 🎉🎉
OpenChat: https://huggingface.co/openchat/openchat
OpenChat_8192: https://huggingface.co/openchat/openchat_8192 (best chat)
OpenCoderPlus: https://huggingface.co/openchat/opencoderplus (best coder)
Dataset: https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset
Congratulations to the authors!!
[1] Orca: the first model to cross 100% of ChatGPT: https://arxiv.org/pdf/2306.02707.pdf
[2] LIMA: Less Is More for Alignment. TL;DR: a small number of VERY high-quality samples (1,000 in the paper) can be as powerful as much larger datasets: https://arxiv.org/pdf/2305.11206
Seems like the model is too big to try for free on Hugging Face. I guess I'll wait until someone hosts it for others to try.
Give it 1-2 weeks, someone will post a free one, and I'll post it here.