• tyler@programming.dev · 12 points · 3 days ago

      I think people are just happy that OpenAI is getting shit on, even if the reality isn’t really what’s being portrayed. For example, I’ve been trying to use R1-32B and it’s nowhere near as good as Claude Sonnet 3.5 has been.

      I stopped using OpenAI so I can’t comment on the performance comparison there, but clearly the benchmarks are all just made-up BS.

      • fossphi@lemm.ee · 5 points · 3 days ago

        I think another factor is that the training of this LLM was apparently significantly cheaper than that of other mainstream models. Or at least that’s what I came across in other forum discussions; I don’t really care enough to dig any deeper.