It was multiple models, mainly 32-70B
There are many projects out there that optimize speed significantly. Ollama is unbeaten in convenience, though
Yeah, but there are many open issues on GitHub about these settings not working correctly. I’m using the API and just couldn’t get it to work: a request to generate a JSON file never produced one longer than about 500 lines. With the same model on vLLM it worked instantly and generated about 2,000 lines
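To show what I mean by the settings, here is a minimal sketch of the per-request context override on Ollama’s /api/generate endpoint. The model name, prompt, and num_ctx value are placeholders, not my exact setup:

# Sketch only: placeholder model/prompt/num_ctx; the point is the
# per-request "options" override for the context window.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:32b",                      # placeholder model
        "prompt": "Generate a large JSON file ...",  # placeholder prompt
        "stream": False,                             # single JSON response
        "options": {"num_ctx": 16384},               # context-size override
    },
    timeout=600,
)
print(resp.json()["response"])

Roughly this pattern is what still got cut off for me, while the same prompt on vLLM went through fine.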
Nothing beats a salami sandwich, honestly
Yo I think we Path of Exile gamers made it pretty clear he is not one of us
Take a look at NVIDIA Project Digits. It’s supposed to release in May for 3k USD and will be kind of the only sensible way to host LLMs then:
Is it so hard for Americans to imagine that there could be places that are not directly next to a road? lol
Newer models totally can
How is Apple pretty bad?
I only discovered it a few days ago and now use it on all my machines
For me it’s ARPGs, but specifically it’s Path of Exile. The visual stimulation combined with the dopamine hit when something good drops, plus a long-term goal to make your character better. They release a new league every 3-4 months, and I always pray it doesn’t consume my life for more than 2 weeks
For anyone trying this: make sure you don’t still have “- TS_USERSPACE=false” in your YAML left over from previous experimentation. After removing it, this works for me too.
The documentation says to add sysctl entries, which is possible in Docker Compose like so:
tailscale:
  sysctls:
    - net.ipv4.ip_forward=1
    - net.ipv6.conf.all.forwarding=1
But it does not seem to make a difference for me. Does anyone know why these would not be required in this specific setup?
Thank you, really appreciate it!
Do you have any links/sources about this? I’m not saying you’re wrong, I’m just interested
Same for me! I happily used Nobara for the season start of my main game, Path of Exile, and will never look back. Btw, is it your main game too by any chance? I was also worried about switching because that game needs some community tools to work properly. I had a few bugs, but it all works perfectly now.
Am I crazy or are you just completely wrong?
https://github.com/waydabber/BetterDisplay/wiki/MacOS-scaling,-HiDPI,-LoDPI-explanation
I hope the Democrats win, but I have to point out that someone being discredited by rumors or misinformation is not suddenly ok just because it’s your side that benefits from it
Was the name gonna be “Three drunk Texans”?
Even the meme sites are better at clearing up misinformation now than most media, lol
I’d read about this method in the GitHub issues, but to me it seemed impractical to keep separate models just to change the context size, and that’s the point at which I started looking for alternatives
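For context, the workaround I mean looks roughly like this. It is only a sketch, assuming the standard Ollama Modelfile syntax and the ollama create CLI; the base model, the derived model name, and the 16384 value are placeholders:

# Sketch of the Modelfile workaround: every context size needs its own
# derived model. Model names and the 16384 value are placeholders.
import subprocess
import tempfile

MODELFILE = "FROM qwen2.5:32b\nPARAMETER num_ctx 16384\n"

with tempfile.NamedTemporaryFile("w", suffix=".Modelfile", delete=False) as f:
    f.write(MODELFILE)
    modelfile_path = f.name

# Creates a second model that differs only in its context window,
# which is the duplication that made me look for alternatives.
subprocess.run(["ollama", "create", "qwen2.5-32b-16k", "-f", modelfile_path], check=True)

You end up with one extra model per context size you want, which is exactly what felt impractical to me.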