• jpablo68@infosec.pub · 18 days ago

    I just want a portable, self-hosted LLM for specific tasks like programming or language learning.

    • plixel@programming.dev · 18 days ago

      You can install Ollama in a Docker container and use it to pull models that run locally. Some are really small and still pretty effective: Llama 3.2, for example, is only 3B, and some models are as small as 1B. You can access it from the terminal, or use something like Open WebUI for a more “ChatGPT”-like interface (quick sketch below).
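
      If it helps, here's a minimal sketch of talking to a local Ollama instance from Python over its HTTP API. It assumes the official Docker image is already running and a model has been pulled; the port (11434) is Ollama's default, while the container name and model tag are just examples, so adjust for your setup:

      ```python
      # Minimal sketch: query a local Ollama instance over its HTTP API.
      # Assumes Ollama is already running, e.g. via the official image:
      #   docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
      #   docker exec -it ollama ollama pull llama3.2
      # "llama3.2" is the 3B variant; "llama3.2:1b" is the 1B one.
      import requests

      resp = requests.post(
          "http://localhost:11434/api/generate",
          json={
              "model": "llama3.2",
              "prompt": "Explain Python list comprehensions in two sentences.",
              "stream": False,  # return one JSON object instead of a token stream
          },
          timeout=120,
      )
      resp.raise_for_status()
      print(resp.json()["response"])  # the model's full reply
      ```

      Open WebUI can point at that same endpoint, so one container serves both the terminal/API and the chat-style interface.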

      • cybersandwich@lemmy.world · 15 days ago

        I have a few LLMs running locally. I don’t have an array of 4090s to spare, so I’m limited to smaller models (8B parameters or so).

        They definitely aren’t as good as anything hosted remotely. Local is more private and controlled, but I’ve found it much less useful than any of the hosted models.