• ffhein@lemmy.world · 1 year ago

    Awesome work! Going to try out koboldcpp right away. Currently running llama.cpp in Docker on my workstation because it would be such a mess to get the CUDA toolkit installed natively…

    Out of curiosity, isn’t conda a bit redundant in Docker, since the container is already an isolated environment?

    • noneabove1182OP · 1 year ago

      Yes, that’s a good one for an FAQ because I get it a lot, and it’s a very good question haha. The reason I use it is image size: the base nvidia devel image is needed for a lot of compilation during Python package installation and is huge, so instead I build everything into a conda env and transfer that to the nvidia runtime image, which is… also pretty big, but it saves several GB of space so it’s a worthwhile hack :) (rough sketch below)

      But yes, avoiding CUDA messes on my bare machine is definitely my biggest motivation.
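
      A rough sketch of what that two-stage Dockerfile can look like (the CUDA image tags, the Python version, and the llama-cpp-python install line are just placeholders for whatever actually needs compiling, not my exact setup):

      ```dockerfile
      # --- build stage: the big nvidia "devel" image, only used for compiling ---
      FROM nvidia/cuda:12.1.1-devel-ubuntu22.04 AS build

      RUN apt-get update && apt-get install -y --no-install-recommends \
              wget ca-certificates git build-essential cmake && \
          rm -rf /var/lib/apt/lists/*

      # install Miniconda and create an isolated env for the app
      RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/conda.sh && \
          bash /tmp/conda.sh -b -p /opt/conda && rm /tmp/conda.sh
      RUN /opt/conda/bin/conda create -y -n app python=3.10

      # anything that needs nvcc / CUDA headers gets compiled here (illustrative package and flag)
      RUN CMAKE_ARGS="-DGGML_CUDA=on" /opt/conda/envs/app/bin/pip install --no-cache-dir llama-cpp-python

      # --- runtime stage: the smaller "runtime" image, which just carries the finished env ---
      FROM nvidia/cuda:12.1.1-runtime-ubuntu22.04

      COPY --from=build /opt/conda/envs/app /opt/conda/envs/app
      ENV PATH=/opt/conda/envs/app/bin:$PATH

      CMD ["python", "-c", "import llama_cpp; print(llama_cpp.__version__)"]
      ```

      The devel stage carries nvcc and the CUDA headers only long enough to build the packages; the runtime stage inherits just the finished conda env plus the CUDA runtime libraries it links against, which is where the several GB of savings come from.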