• WolfLink · 4 days ago

    You can run your own LLM chatbot with https://ollama.com/

They have some really small models that only need about 1 GB of VRAM, but you'll generally get better results if you pick the biggest model that fits on your GPU.
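
For example, once Ollama is running, you can talk to it from a short Python script over its local REST API. A minimal sketch, assuming the default port (11434) and that you've already pulled a small model — `llama3.2:1b` here is just an example of a ~1 GB model, swap in whatever fits your GPU:

```python
# Minimal sketch: query a locally running Ollama server via its REST API.
# Assumes Ollama is installed and serving on the default port 11434,
# and that the model has already been pulled (e.g. `ollama pull llama3.2:1b`).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:1b",  # example small model; use the biggest that fits your VRAM
        "prompt": "Why is the sky blue?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```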