So what is currently the best and easiest way to use an AMD GPU? For reference, I own an RX 6700 XT and want to run a 13B model, maybe SuperHOT, but I'm not sure if my VRAM is enough for that. Until now I've always stuck with llama.cpp, since it's quite easy to set up. Does anyone have any suggestions?

  • Mechanize@feddit.it · 1 year ago

    I have an RX 6650 XT, and I generally use llama.cpp with the ROCm patch (tested up to commit ac7876ac20124a15a44fd6317721ff1aa2538806).

    It works great with around 25 layers offloaded to the GPU for my 8 GB card, or 18 if you want to do something else GPU-related at the same time (like watching a hardware-accelerated video).
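    For reference, a rough sketch of what that setup looks like, assuming the hipBLAS build flag used by the ROCm PR; the model path and layer count are just illustrative:

        # Build llama.cpp with the ROCm/hipBLAS backend (flag from the ROCm PR)
        make clean && make LLAMA_HIPBLAS=1

        # Offload 25 layers to the GPU; drop to ~18 if the card is busy elsewhere
        ./main -m models/llama-13b.ggmlv3.q4_0.bin -ngl 25 -p "Hello"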

    To be fair, it's been a long time since I last updated llama.cpp, and it has gone through a lot of changes in the meantime, like the addition of the LLAMA_CUDA_DMMV_X, LLAMA_CUDA_DMMV_Y and LLAMA_CUDA_KQUANTS_ITER parameters. So your mileage may vary, and it's possible you'll have to manually modify the PR before merging it, so it's not really a one-click experience if you want the best performance.
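    Those are compile-time defines, so trying different values means rebuilding. A hedged example of the syntax only; the values below are placeholders, not recommendations, and I'm assuming the defines apply to the HIP build the same way they do to the CUDA one:

        # Placeholder tuning values; the right ones depend on your card
        make clean && make LLAMA_HIPBLAS=1 \
            LLAMA_CUDA_DMMV_X=64 \
            LLAMA_CUDA_DMMV_Y=2 \
            LLAMA_CUDA_KQUANTS_ITER=1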

    It currently doesn't support SuperHOT or similar techniques, mainly because new ones are being pushed every day, and they are waiting to see which will be the real winner.

    But I've gone a bit off-topic. I think the easiest option, as the other commenter said, is to just go with kobold.cpp. I personally didn't have a good experience with text-generation-webui, but a lot of people swear by it.

    • Mixel@feddit.de (OP) · 1 year ago

      Yes, thank you for the information, I really appreciate it! I've decided to go with kobold.cpp for the time being, using CLBlast, which works way better overall than standard CPU inference. But I'm also looking into the ROCm llama.cpp support, which I'm currently trying out.
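      In case it helps anyone else, a koboldcpp CLBlast launch looks roughly like this; the platform/device indices, layer count and model path are placeholders for whatever your system reports:

          # "0 0" = OpenCL platform index and device index
          # --gpulayers only helps on builds with OpenCL offload support
          python koboldcpp.py --useclblast 0 0 --gpulayers 25 models/llama-13b.ggmlv3.q4_0.bin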

    • actually-a-cat · 1 year ago

      Not sure what happened to this comment… Anyway, ooba (text-generation-webui) works with AMD on Linux, but ROCm is super jank at the best of times, and the 6700 XT is not officially supported, so it might be hopeless.
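      If you want to try ROCm on it anyway, the workaround people usually mention for unsupported RDNA2 cards (the 6700 XT is gfx1031) is spoofing a supported target via an environment variable; no guarantees it helps in any given setup:

          # Tell ROCm to treat the card as gfx1030, the officially supported RDNA2 target
          export HSA_OVERRIDE_GFX_VERSION=10.3.0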

      llama.cpp has some GPU acceleration support on AMD in CLBlast mode; if you aren't already using it, it might be worth trying.
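      A minimal sketch of the CLBlast route, assuming CLBlast and its dev headers are installed; the model path is illustrative:

          # Build llama.cpp with the CLBlast (OpenCL) backend
          make clean && make LLAMA_CLBLAST=1

          # -ngl offloads layers through OpenCL on builds that support it
          ./main -m models/llama-13b.ggmlv3.q4_0.bin -ngl 25 -p "Hello"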

      • Mixel@feddit.de (OP) · 1 year ago

        How do you use ooba with ROCm? I looked at the Python file where you can pick AMD during install, and it just says “AMD not supported” and exits. I guess it just doesn't update webui.py when I update ooba? I also heard somewhere that llama.cpp with CLBlast wouldn't work with ooba, or am I wrong? Also, is koboldcpp worth a shot? I've heard of some success with it.

        • actually-a-cat · 1 year ago

          I can recommend kobold, it’s a lot simpler to set up than ooba and usually runs faster too.

          • Mixel@feddit.de (OP) · edited · 1 year ago

            I will try that once I'm home! Thanks for the suggestions. Can I also use kobold with SillyTavern? IIRC there was an option for KoboldAI or something; is that koboldcpp, or what does that option do?

            EDIT: I got it working and it's wonderful, thank you for suggesting this :) I had some difficulties setting it up, especially with opencl-mesa, since I had to install opencl-amd and then find out the device ID and so on, but once it was working it was great!
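            For anyone hitting the same device ID step: with opencl-amd installed, something along these lines shows the platform and device numbers to pass to koboldcpp (output format varies by system, and the indices and model path below are placeholders):

                # List OpenCL platforms and devices; note the indices
                clinfo -l

                # Then point koboldcpp at that platform/device pair
                python koboldcpp.py --useclblast 0 0 model.bin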

  • saplingtree@kbin.social · 1 year ago

    Just pay nvidia their ill-earned ounce of flesh. I say this as a strong AMD advocate.

    It's clear that AMD isn't serious about the AI market. They had years to provide a proper competitor to CUDA, or at the very least a 1:1 compatibility layer. Instead of doing either of those things, AMD kept messing around with half-assed projects like ROCm and the other one whose name I don't care to look up. AMD has the resources to build a CUDA-compatible API in under six months, but for some reason they don't. I don't know why, and at this point I don't really care.

    Buy an AMD GPU for AI at your own risk.