• @[email protected]
    9 months ago

    Imagine a standardized API where you provide either your own LLM running locally, your own LLM running in your server (for enthusiasts or companies), or a 3rd party LLM service over the Internet, for your optional AI assistant that you can easily disable.

    Regardless of your DE, you could choose if you want an AI assistant and where you want the model to run.
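
    A minimal sketch of what that pluggable design could look like, purely as an illustration — all class and method names here are invented, not an existing API. One backend interface, with a local model, a self-hosted/third-party endpoint, or nothing at all (assistant disabled) slotted in behind it:

```python
# Hypothetical sketch of a standardized assistant API (all names invented).
# A DE would talk only to Assistant; the user picks (or disables) the backend.
from abc import ABC, abstractmethod
from typing import Optional


class LLMBackend(ABC):
    """Anything that can turn a prompt into a completion."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class LocalBackend(LLMBackend):
    """Model running on the user's own machine (placeholder logic)."""

    def complete(self, prompt: str) -> str:
        return f"[local] echo: {prompt}"  # stand-in for a real local model


class RemoteBackend(LLMBackend):
    """Self-hosted server or third-party service (placeholder logic)."""

    def __init__(self, url: str):
        self.url = url

    def complete(self, prompt: str) -> str:
        return f"[{self.url}] echo: {prompt}"  # stand-in for an HTTP call


class Assistant:
    """DE-agnostic front end; backend=None means the assistant is disabled."""

    def __init__(self, backend: Optional[LLMBackend] = None):
        self.backend = backend

    def ask(self, prompt: str) -> str:
        if self.backend is None:
            raise RuntimeError("assistant is disabled")
        return self.backend.complete(prompt)
```

    The point is that swapping `LocalBackend` for `RemoteBackend("https://example.invalid")` changes nothing for the desktop environment calling `Assistant.ask()`.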

    • @[email protected]
      9 months ago

      I’ve had this idea for a long time now, but I don’t know shit about LLMs. GPT can be run locally though, so I guess only the API part is needed.

      • @[email protected]
        9 months ago

        I’ve run LLMs locally before; it’s the unified API for digital assistants that would be interesting to me. Then we’d just need an easy way to acquire LLMs that laypeople could use, though any bigger DE or distro could probably ship a setup wizard.

    • @[email protected]
      9 months ago

      Check out the KoboldAI and KoboldAssistant projects. That’s literally the thing you’re describing, and it’s open source.