South Korea says it's uncovered evidence that DeepSeek has secretly been sharing data with ByteDance, the parent company of popular social media app TikTok.
I don’t really use LLMs, so I didn’t even realize there were versions with different weights and stuff. I was using 7B, but found it pretty useless. Pretty sure I’m not going to be able to run 32B on my rig. lmao.
Guess I’ll continue being an LLM-less pleb.
There are plenty of free LLM APIs you can use with something like Open Web UI, on any machine. I still use them myself.
Have you got any recs? I’ve got a 3080 in my machine atm
I’m not @[email protected], but here’s a pretty barebones how-to article to get you started. Just know it can be as complicated as you like. For starters you may want to stick to the 7B and 14B models like mistral:7b and phi4:14b, as they’ll fit easily on your card and will let you test the waters (there’s a quick sanity-check script after the links below).
If you’re on Windows https://doncharisma.org/2024/11/23/self-hosting-ollama-with-open-webui-on-windows-a-step-by-step-guide/
If you’re using Linux https://linuxtldr.com/setup-ollama-and-open-webui-on-linux/
If you want a container https://github.com/open-webui/open-webui/blob/main/docker-compose.yaml
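Once Ollama is up and you’ve pulled a model (e.g. `ollama pull mistral:7b`), you can sanity-check it from a script via its local HTTP API, which listens on localhost:11434 by default. A minimal sketch, assuming the default port and that mistral:7b is already pulled:

```python
import json
import urllib.request

# Ask the local Ollama server (default: localhost:11434) for a single completion.
# Assumes you've already run `ollama pull mistral:7b`.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "mistral:7b",
    "prompt": "In one sentence, what does the 7B in a model name mean?",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Open WebUI points at that same Ollama endpoint under the hood, so if this works, the web UI setup from the guides above should too.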
Locally? Arcee 14B and the 14B DeepSeek distill are currently the best models that will fit.
I’d recommend hosting them with TabbyAPI instead of ollama, as they’ll be much faster and more VRAM-efficient, but it’s a bit more fuss.
Honestly, I would just try free APIs like Gemini, Groq, and such through Open WebUI, or use really cheap APIs like OpenRouter. Newer 14B models are okay, but they’re definitely lacking that “encyclopedic intelligence” larger models have.
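For the OpenRouter route: it exposes an OpenAI-compatible chat-completions endpoint, so any OpenAI-style client (or plain HTTP) works. A rough sketch using only the standard library; the model ID is just an example and the API key would come from your own OpenRouter account:

```python
import json
import os
import urllib.request

# Minimal OpenRouter call; it speaks the OpenAI chat-completions format.
# Set OPENROUTER_API_KEY in your environment; the model ID below is only an example.
API_KEY = os.environ["OPENROUTER_API_KEY"]
URL = "https://openrouter.ai/api/v1/chat/completions"

payload = {
    "model": "mistralai/mistral-7b-instruct",  # swap for whichever cheap/free model you pick
    "messages": [
        {"role": "user", "content": "Give me a one-paragraph rundown of local vs. hosted LLMs."}
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())
    print(reply["choices"][0]["message"]["content"])
```

Open WebUI should also be able to use OpenRouter directly if you add it as an OpenAI-compatible connection with that same base URL.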