misk@sopuli.xyz to Technology@lemmy.world · English · 3 days ago
OpenAI and others seek new path to smarter AI as current methods hit limitations (www.reuters.com)
cross-posted to: [email protected], [email protected]
A_A@lemmy.world · English · 3 days ago
… “Alibaba (LLM)” … is it this?
Qwen2.5: A Party of Foundation Models!
https://qwenlm.github.io/blog/qwen2.5/

brucethemoose@lemmy.world · English · edited 3 days ago
BTW, as I wrote that post, Qwen 32B Coder came out. Now a single 3090 can beat GPT-4o, and do it way faster, in coding specifically.

A_A@lemmy.world · English · 3 days ago
Great news 😁🥂, someone should make a new post on this!

brucethemoose@lemmy.world · English · 3 days ago
Yep. 32B fits on a “consumer” 3090, and I use it every day. 72B will fit neatly on 2025 APUs, though we may have an even better update by then. I’ve been using local LLMs for a while, but Qwen 2.5, specifically 32B and up, really feels like an inflection point to me.
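For reference, a minimal sketch of what running Qwen2.5-32B-Instruct on a single 24 GB 3090 can look like, assuming a roughly 4-bit GGUF quantization served through llama-cpp-python. The file name, context size, and prompt are illustrative, not the actual setup described above:

```python
# Illustrative sketch: a ~4-bit quantized Qwen2.5-32B-Instruct on a single
# 24 GB GPU (e.g. an RTX 3090) via llama-cpp-python. The GGUF file name,
# context size, and prompt are assumptions, not the commenter's setup.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-32B-Instruct-Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=8192,       # context window; smaller values use less VRAM
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

At roughly 4-bit quantization a 32B model's weights come to around 19–20 GB, which is why it fits in 24 GB of VRAM with room left for a modest context window.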