Text from them:

Calling all model makers, or would-be model creators! Chai asked me to tell you all about their open source LLM leaderboard:

Chai is running a totally open LLM competition. Anyone is free to submit a Llama-based LLM via our Python package 🐍 It gets deployed to users on our app. We collect the metrics and rank the models! If you place high enough on our leaderboard you’ll win money 🥇

We’ve paid out over $10,000 in prizes so far. 💰

Come to our discord and check it out!

https://discord.gg/chai-llm

Link to latest board for the people who don’t feel like joining a random discord just to see results:

https://cdn.discordapp.com/attachments/1134163974296961195/1138833170838589471/image1.png

  • @[email protected]
    11
    edit-2
    1 year ago

    Me at first: wow, that’s cool, I wonder how models are ranked

    Come to our discord and check it out!

    OK, bye

  • @[email protected]
    4
    1 year ago

    At least (as far as I can tell) they appear to be ranking the models by human evaluation rather than “benchmarks”, which comes closer to measuring real-world performance.

    It would be interesting to consider the types of questions that users are posing. For example there is a difference between asking:

    • A surface-level fact-based question such as “what is …”

    • A creative question like “write a story/article about …” or “give me a list of possible talking points for a presentation on …”

    • A question about reasoning/understanding like “why do you think the word … is more popular than … when referring to …” or “explain why … is considered socially acceptable while … is not”

    • Anything coding-related

    Also, some models seem to do well at things that can be answered after one or two replies, but struggle to follow an argument if you try to go more in-depth or continue a conversation about a topic.

    • @noneabove1182OPM
      1
      1 year ago

      Yeah, it’s a step in the right direction at least. Though now that you mention it, doesn’t lmsys or someone do the same thing with human eval and side-by-side comparisons?

      It’s such a tricky line to walk between deterministic questions (repeatable but cheatable) and user questions (real-world but potentially unfair).
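      For context on what “side-by-side comparisons” means in practice: pairwise human votes are typically aggregated into an Elo-style rating, where each vote nudges the winner up and the loser down. A minimal sketch of that idea (illustrative only — model names are made up, and real leaderboards use more elaborate statistics):

      ```python
      # Minimal Elo-style aggregation of pairwise human votes between models.
      # This is a sketch of the general technique, not any leaderboard's exact method.
      def update_elo(ratings, winner, loser, k=32):
          ra, rb = ratings[winner], ratings[loser]
          # Expected score of the winner under the logistic Elo model.
          expected_win = 1 / (1 + 10 ** ((rb - ra) / 400))
          # The less expected the win, the bigger the rating swing.
          ratings[winner] = ra + k * (1 - expected_win)
          ratings[loser] = rb - k * (1 - expected_win)

      # Two hypothetical models start at the same rating.
      ratings = {"model_a": 1000.0, "model_b": 1000.0}

      # One side-by-side vote where a human preferred model_a.
      update_elo(ratings, winner="model_a", loser="model_b")
      print(ratings)  # model_a moves above model_b
      ```

      With evenly matched ratings, each vote moves both models by k/2 points; upsets against a much higher-rated model move the ratings further. That property is what lets noisy per-user questions still converge to a stable ranking over many votes.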