Hey all! This is my first post, so I’m sorry if anything is formatted incorrectly or if this is the wrong place to ask. Recently I’ve saved up enough to upgrade my graphics card ($350 budget). I’ve heard great things about AMD on Linux and appreciate open source drivers, so as not to be at the mercy of Nvidia. My first choice of graphics card was a 6700 XT, but then I heard that Nvidia has significantly higher performance in workstation tasks (not to mention the benefits of CUDA and NVENC), so I’ve been looking into a 3060 or 3060 Ti. I do a bit of gaming in my free time, but it’s not my top priority, and I can almost guarantee that any option in this price range will be more than enough for the games I play. Ultimately my questions come down to:

  1. Would Nvidia or AMD provide more raw performance on Linux in my price range?
  2. Which would be better for productivity (CUDA, encoding, etc.)? I mainly use Blender, FreeCAD, and SolidWorks, but I appreciate having extra features for any software I may use in the future.
  3. Which option would hold up best after a few years? (I’ve seen AMD improve performance with driver updates before, but the NVK driver also looks promising. I also host some servers and tend to cycle components from my main system into my Proxmox cluster.)

Also, a bit more detail to fill in any missing info: my current system is a Ryzen 7 3700X, GTX 1050 Ti, 32 GB RAM, 850 W PSU, and an NVMe SSD. I’ve only ever used Nvidia cards, but AMD looks like a great alternative. As another side note: is there any way to run CUDA apps on AMD? I plan on running my new GPU alongside my old one, so NVENC is not too much of a concern.

Thanks in advance for any thoughts or ideas!

Edit 1: Thanks so much for all of the feedback! I’m not going to purchase a GPU quite yet, probably in a few weeks. First I’ll be testing Wayland with my 1050 Ti and researching how much I need each feature of each GPU. Thanks again for all of your feedback; I’ll update the post when I do order said GPU.

Edit 2: I made an interesting decision and actually got the Arc A770. I’d be happy to discuss exactly why, and some of the pros and cons so far, but I do plan on eventually compiling a more in-depth review somewhere, sometime.

  • @[email protected]
    link
    fedilink
    208 months ago

    I am assuming you currently use Linux. Do you currently use CUDA with FreeCAD and SolidWorks (which I am assuming you use through WINE or a VM)? AMD generally has better raw performance at the same price but has nothing equivalent to CUDA at this point. There is ROCm, and plans for CUDA through ROCm, but GPU support for ROCm is hit or miss. You also have OpenCL, but performance is nowhere near as good as using CUDA, even if the GPU using CUDA is weaker. AMD will provide a much better gaming and day-to-day usage experience, though.

    • @neogeo (OP) · 2 points · 8 months ago

      I dual boot Debian and Arch, with Debian being primarily for workstation tasks and Arch for gaming and any software I want a more recent version of (KiCad). It sounds like FreeCAD is mostly CPU bound, and I haven’t used SolidWorks at all yet (I may take a mechanical engineering class where they’ll be using it). Considering AMD is higher performing in raw power, can ROCm be good enough to get by while I wait for it to catch up to CUDA?

      • @[email protected]
        link
        fedilink
        18 months ago

        There is also an availability problem. Only a select few AMD GPUs support ROCm. There are ways to get it working on unsupported GPUs, but I don’t use ROCm so I don’t have much idea of how that works; you will have to ask somebody else about that. My point is, if your need for CUDA is sure and solid, buy an Nvidia GPU. Also check whether SolidWorks can be made to run on Linux, because it may not work under WINE and there is no native version.

  • lemmyvore · 10 points · 8 months ago

    “Raw” performance is going to be similar.

    For Blender you definitely want Nvidia.

    For games you can go either way, especially if it’s not your main goal.

    AMD being open source is a mixed bag and not as clear-cut as it should be. They’re notorious for being slow to fix bugs, for example, so for any card you’ll have to check how recent it is: the more recent the card, the more bugs still around. (Yes, being open source means anybody can write bug fixes, but they can’t force AMD to get off their ass and test and accept the fixes…) The drivers being open does make some interesting things possible, but not enough that you’re going to see a huge difference in everyday use.

    Since it’s potentially your first AMD card after using only Nvidia, I strongly urge you (if you get AMD) to buy from somewhere with a good return policy, test everything, and return it if something is not OK.

    At my last attempt to switch to AMD a couple of years ago I ran into a bug that prevented my monitors from going to sleep, to give you an example. I know it’s anecdotal and a poor sample of one card model on one particular distro etc. but it is the kind of stuff that happens.

    Other than that, you’re going to see a lot of opinionated discussions about AMD being open source vs Nvidia refusing to be, which often veer into ideology. Don’t get drawn into that; get a card that’s good for you and test it thoroughly.

    • @neogeo (OP) · 1 point · 8 months ago

      I completely agree that I should test it and do more research. Fortunately, a friend of mine has the 6700 XT, so I’ve asked him to test some of my most important software on it (Meshroom, Blender, FreeCAD). I also share that open source ideology, but my mindset is: if I get this GPU and support for it is dropped in, say, 10 years, how usable will it be at that point?

  • Lettuce eat lettuce · 9 points · 8 months ago

    I’ve been running a 6700xt for the last year and a half and it’s been great! Plays everything I want at high/ultra 1080p, anywhere from 160-240FPS depending on the game and settings.

    I record gameplay with OBS no problem, too. I’m on Nobara Linux, a gaming-focused Fedora distro, and haven’t had a single issue with it so far.

  • @[email protected]
    link
    fedilink
    English
    68 months ago

    Keep in mind that nvidia drops proprietary driver support for its older cards from time to time, so your card will eventually be desupported by them. The extent to which this matters for you depends on how long a timespan your “after a few years” represents. If “a few years” is just 2-3 years, you’re probably okay, but if it’s 8-10 years, your card will be desupported before you’re ready to get rid of it.

    CUDA is a proprietary nvidia API, so you aren’t likely to get it working on an AMD card.

    • @neogeo (OP) · 2 points · 8 months ago

      Yeah, my use case is definitely more in the 10+ years range, lol. I’ve only recently learned about ROCm and HIP for AMD, which may show promise as well. Do you think NVK will have matured more by then?

      • @[email protected]
        link
        fedilink
        18 months ago

        I just read today that the newest version of ROCm (5.7.1) supports the AMD Radeon RX 7900 XTX, the first consumer GPU to have official support in a long time. That one is about three times your budget, so there is no way to get an officially supported one. Reportedly some unsupported models work too, but I’d say you’re looking at a lot of hurdles here.
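
        For what it’s worth, the workaround people usually report for officially unsupported consumer cards is an environment override set before the ROCm runtime initializes. A minimal sketch, assuming a ROCm build of PyTorch is installed; the override value is illustrative (it’s what people commonly use for RDNA2 cards like the 6700 XT), not something AMD officially supports:

```python
import os

# Reported workaround (not official support): make ROCm treat an unsupported
# RDNA2 card (e.g. gfx1031 in a 6700 XT) as the supported gfx1030 ISA. It must
# be set before any ROCm-backed library initializes, hence before the import.
os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")  # value is illustrative

import torch  # assumes a ROCm build of PyTorch

if torch.cuda.is_available():              # ROCm builds report the HIP device here
    print(torch.cuda.get_device_name(0))   # e.g. the Radeon model string
else:
    print("ROCm still doesn't see the card")
```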

  • eshep · 5 points · 8 months ago

    @neogeo I think you may be on the right track with grabbing a newer AMD card and keeping your old Nvidia one just for the encoding stuff, if you absolutely need it. I only do fairly small drawings (mostly technical) in both Blender and FreeCAD, as well as some occasional video editing in Blender. I’ve had an RX 5600 XT since before we had proper drivers for it, and I’ve had no issues with it ever since they were in testing.

    • @[email protected]
      link
      fedilink
      English
      38 months ago

      I’ll preface this with: I don’t do any of the workstation tasks being mentioned here, I can only speak as a regular desktop/light-gaming user, but…

      I’d agree with this take. I have an Nvidia 2080 right now, and at the start of the new month I’m looking to pick up a 6700XT (I have a low budget as well, so it’s about the best I can shoot for) because I’ve hit my limit with Nvidia’s shitty Linux support. An X11 session feels like crap because the desktop itself doesn’t even seem to render at 60 FPS (or rather, not consistently, but with a ton of frame drops) - and I only have two 1080p 60Hz displays… should be easy for this card. A Wayland session feels “smooth”, but is glitchy as hell due to a multitude of reasons. It is only just now (as of the 17th, IIRC), with the release of their 545 beta driver, that Night Light/Night Color finally works on a Wayland session, because they lacked GAMMA_LUT support in their driver… But now XWayland apps feel even worse because of this problem. This is not going to be fixed until either Nvidia moves their driver to using implicit sync, which won’t happen - or they actually manage to convince everyone to move over to supporting explicit sync, which requires the proposal being accepted into the standard (something that will take a while) and all compositors being updated to support it.

      I am on the opposite side from the OP: I don’t do any sort of rendering/encoding, but I spend a fair amount of time gaming. The XWayland issue in particular is basically the deal breaker, since most things still use XWayland.

      While I do hear that Nvidia is the choice for anything that needs NVENC or CUDA, using the desktop side of things will feel horrible if you go with an Nvidia card as your primary, and you’ll constantly be trying to chase workarounds that only make it slightly better.

      I’d really rather not spend money on a new GPU right now - a friend gave me the old 2080 I’m using at the beginning of the year, specifically because money has been really tight for me - but when you try to use your PC (and I work from home, so that’s a major factor) and you feel like you’re constantly having to fight it every. single. day just to do the basics, well… enough is enough. I’ve heard some Nvidia users say that it works perfectly fine for them, and that’s fantastic - but that has not been remotely close to my experience. It’s just compromise after compromise after compromise. I hope that the NVK driver will change things for non-workstation workflows (since I don’t imagine you’d be able to use NVENC/CUDA with NVK), but the driver isn’t ready for production use as far as I understand.

      At the very least, if you’re able to keep your AMD card as your primary and just add in the Nvidia GPU, then you can use Nvidia’s PRIME offloading feature to run applications specifically on the Nvidia GPU. This has… its own fair share of problems from what I’ve heard, but the issues I’ve seen in passing have generally been on the gaming side; I’m not 100% sure how it does for things like NVENC/CUDA. Sadly for me, I don’t believe my case/board actually has enough space for both GPUs, and even if it did, it certainly wouldn’t have enough room with my extra PCI-E WiFi adapter in there - but that’s a bridge to cross when I get there, I suppose.
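
      For reference, a rough sketch of what that offload looks like in practice, using the render-offload environment variables documented for the proprietary Nvidia driver (the wrapper function here is just for illustration, not part of any tool):

```python
import os
import subprocess

def run_on_nvidia(cmd: list[str]) -> subprocess.CompletedProcess:
    """Launch a program on the secondary Nvidia GPU via PRIME render offload.

    Assumes the AMD card stays the primary/display GPU and the proprietary
    Nvidia driver (with render offload support) drives the second card.
    """
    env = os.environ.copy()
    env["__NV_PRIME_RENDER_OFFLOAD"] = "1"       # request rendering on the Nvidia GPU
    env["__GLX_VENDOR_LIBRARY_NAME"] = "nvidia"  # route GLX through Nvidia's libGLX
    return subprocess.run(cmd, env=env)

# Quick check of which GPU actually ends up rendering:
# run_on_nvidia(["glxinfo", "-B"])
```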

      I guess my conclusion for the OP is, how is your current desktop experience with your 1050TI? If it hasn’t been a hindrance for you, then perhaps you’re fine with your current plan - but as the Linux ecosystem starts to move more towards Wayland being the only realistic option to use, I do fear that Nvidia users are going to suffer a whole lot until Nvidia can finally get their act together… but I suspect there will be a massive lag time between the two.

      • @neogeo (OP) · 2 points · 8 months ago

        With most of my Nvidia cards, X11 works great most of the time, but Wayland is sketchy in most scenarios, and sometimes just won’t boot at all on my GTX 670. I haven’t used Wayland as much as I’ve used X11 (I use Wayland on most of my systems with iGPUs), and while I don’t do a ton of gaming, I do use, and love experimenting with, Linux. It sounds like AMD may provide a smoother desktop on Linux, so I’ll need to take that into account as well for a GPU upgrade.

    • @neogeo (OP) · 1 point · 8 months ago

      At the moment I’m torn between getting an Nvidia card and waiting for NVK to be developed, or getting an AMD card and waiting for ROCm to be developed. As a side note, I realized that while I will still hold onto my 1050 Ti, I may not have enough PCIe lanes to run said new GPU at a full x16, and instead may put my 1050 Ti in one of my Proxmox nodes (maybe use it for a Blender cluster, idk). How have FreeCAD and Blender been with the 5600 XT? I’m just wondering if AMD may be a better long term option because of its raw power and already existing open source drivers.

      • eshep · 1 point · 8 months ago

        @neogeo It’s been excellent, but again, I’m not doing very heavy work with it. Although, if I do play around with large models, it has no problem rendering them. And games such as Star Citizen, Starfield, and Cyberpunk 2077 all run fantastic when turned up to 11.

  • @[email protected]
    link
    fedilink
    48 months ago

    As much as I want to say AMD because of the open source drivers (I’ve also never had one, but my next card is definitely going to be an AMD one), you mentioned Blender, and last I checked Nvidia’s GPUs are much more performant in Blender. Here’s a benchmark: https://www.pugetsystems.com/solutions/3d-design-workstations/blender/hardware-recommendations/ . There you can see that a 3060 has slightly worse performance than a 7900 XTX and considerably better performance than a 6900 XT; you’re talking about getting a 6700 XT, so the difference will be even larger. So if Blender is your primary use case, I would go with Nvidia.

    • @[email protected]
      link
      fedilink
      English
      2
      edit-2
      8 months ago

      I’m not going to disagree, just add to what you wrote.

      While it’s true that AMD’s HIP is nowhere near as powerful as either CUDA or OptiX, my 6750 XT is about as performant as my previous 2060 Super, and definitely not unusable. The single greatest performance hog in Cycles is actually the viewport denoiser, because it runs on the CPU (as opposed to OptiX, which runs on the GPU) and runs on every frame.

      There is an additional issue with Eevee: complex shaders take forever to compile. It’s an issue with Mesa and there is already a patch that will likely be included in the next release.

      Still, as painful as it is, nvidia has better performance and usability in Blender at present.

  • @[email protected]
    link
    fedilink
    3
    edit-2
    8 months ago

    For 350€, you can buy yourself a Radeon Pro workstation card. I have a W6600 in my Linux workstation; it works great with Blender. For about 80€ more, you can buy the current-gen AMD entry workstation card, the W7500. The W6600 is sufficient too; it’s the last-gen high-end workstation card, after all.

    I personally prefer Team Red on Linux. Much better driver support.

  • @[email protected]
    link
    fedilink
    3
    edit-2
    8 months ago

    It may depend on how highly you value your software freedom and the benefits that come with it. Even if the performance per dollar for GPU tasks in Blender were 25% worse, personally I would still go for the one with the free driver.

  • @[email protected]
    cake
    link
    fedilink
    28 months ago

    I can only comment on #2: FreeCAD and SolidWorks will probably never make use of CUDA in a meaningful way.

        • Ook the Librarian · 3 points · 8 months ago

          On a well-thought-out post about a dilemma, you post ‘try third thing’. How was I supposed to know that wasn’t a joke?

          • @neogeo (OP) · 1 point · 8 months ago

            Hey, I’m assuming you both were referring to Intel Arc? I’ve looked into the A770, and with 16 GB of VRAM and some impressive specs, would it be a good option? I’ve heard Intel seems to be slightly more active with tools like oneAPI than AMD is with ROCm.

  • danielfgom · 0 points · 8 months ago

    Nvidia sucks. Sell that card and get AMD. Especially if you’re on Linux

    • @neogeo (OP) · 1 point · 8 months ago

      It looks like 1050 Tis go for around $50-$90, which may allow me to get a 4060 Ti or a 7700 XT. How well does ROCm work with AMD cards? Would ROCm work well enough in Blender to contest a 4060 Ti with CUDA? Can existing CUDA software be run with ROCm without developers porting it?

  • @[email protected]
    link
    fedilink
    -18 months ago

    Nvidia is not nearly as bad on Linux as people say, and Radeon isn’t nearly as great as people say.

    For your use case I would 100% go with an Nvidia GPU. It will work so much better with Blender. It will also work better with other workstation tasks like video encoding and AI/ML. AMD’s open source driver doesn’t support the AMF encoder, so you’d have to use their proprietary driver (and lose the benefits of the open source one that everyone raves about). ROCm is improving, but it’s so far behind CUDA that it will end up holding you back for AI/ML compute tasks.

    • @neogeo (OP) · 2 points · 8 months ago

      I’m new to ROCm and HIP; do you think that they’ll improve over time? Does AMD have an existing implementation for any CUDA software, or must developers port stuff over to ROCm? I ask because most of my CUDA software already runs OK-ish on my 1050 Ti, so if I went AMD it may provide reasonable performance, with possible ROCm development in the future. Also, you mentioned AI/ML, and I’d actually really like to give TensorFlow a try at some point. At the moment it seems that each GPU has features that are in development (NVK vs ROCm), and whichever I go with, it sounds like I’ll be crossing my fingers for each to mature at a reasonable time. At the moment I’m leaning Nvidia, because if NVK gains traction in a few years, it could provide a good open source alternative to switch to.

      • @[email protected]
        link
        fedilink
        -18 months ago

        They will definitely improve over time–if only because it couldn’t possibly get worse. :)

        Joking aside, they’ve made significant improvements even over the last few months. TensorFlow has a variant that supports ROCm now. PyTorch does as well. Both of those happened in the last 6 months or so.
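
        To illustrate, a minimal sanity check, assuming the ROCm variant of TensorFlow (the tensorflow-rocm package) is installed; the point is that the code itself needs nothing AMD-specific:

```python
# pip install tensorflow-rocm   (the ROCm variant; the Python API is the same as stock TF)
import tensorflow as tf

# The ROCm build exposes the Radeon card as a regular "GPU" device, so
# existing TensorFlow code runs unchanged.
print(tf.config.list_physical_devices("GPU"))

with tf.device("/GPU:0"):
    x = tf.random.normal((2048, 2048))
    y = tf.linalg.matmul(x, x)   # runs on the GPU if one was listed above
print(y.shape)
```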

        AMD says it’s prioritizing ROCm (https://www.eetimes.com/rocm-is-amds-no-1-priority-exec-says/), but if you read the Hacker News thread on that same article you’ll see quite a few complaints and some skepticism.

        The thing about CUDA is that it has over a decade of a head start, and Nvidia, for all its warts, has been actively supporting it for that entire time. So many things just work with Nvidia and CUDA that you’ll have to cobble together and cross your fingers with ROCm. There is an entire ecosystem built around CUDA, so there are tools, forums, guides, etc. all a quick web search away. That doesn’t exist (yet) for ROCm.

        To put it in perspective: I have a 6900 XT (that I regretfully sold my 3070 Ti to buy). I spent a week just fighting with ROCm to get it to work. It involved editing some system files to trick it into thinking my Pop!_OS install was Ubuntu, and carefully installing JUST the ROCm driver, since I still wanted to use the open source AMD drivers for everything else. I finally got it working, but NO libraries at the time supported it, so all of the online guides, tutorials, etc. couldn’t be used. The documentation is horrendous, imo.

        I actually got so annoyed that I bought a used 1080 Ti to do the AI/ML work I needed to do. It took me 30 minutes to install it in a headless Ubuntu server and get my code up and running. It’s been working without issue for 6 months.