• Varyk · 9 months ago

    What does this mean? Thanks I don’t understand the terms here

    • TechNom (nobody)@programming.dev · 9 months ago

      CUDA is an API for running high-performance compute code on Nvidia GPUs. CUDA is proprietary, so CUDA programs run only on Nvidia GPUs. Open alternatives like Vulkan compute and OpenCL exist, but they aren’t as popular as CUDA.

      Translation layers are interface software that allows CUDA programs to run on non-Nvidia GPUs. Creating such a layer requires a bit of reverse engineering of the CUDA runtime, and Nvidia is prohibiting that now. They want to ensure that all the CUDA programs in the world are limited to using Nvidia GPUs alone - classic vendor lock-in by using an EULA.

      • Varyk · 9 months ago

        Thank you, that’s simple enough that I can understand what you’re saying, but complex enough that all of my questions are answered.

        Great answer

      • mindbleach · 9 months ago

        … and in addition to the specifics of this abuse, fuck EULAs in general.

    • floofloof@lemmy.ca · 9 months ago

      CUDA is a system for programming GPUs (Graphics Processing Units), and it can be used to do far more computations in parallel than regular CPU programming could. In particular, it’s widely used in AI programming for machine learning. NVIDIA has quite a hold on this industry right now because CUDA has become a de facto standard, and as a result NVIDIA can price its graphics cards very high. Intel and AMD also make powerful GPUs that tend to be cheaper than NVIDIA’s, but they don’t natively support CUDA, which is proprietary to NVIDIA.

      A translation layer is a piece of software that interprets CUDA commands and translates them into commands for the underlying platform, such as an AMD graphics card. So translation layers allow people to run CUDA software, such as machine learning software, on non-NVIDIA systems. NVIDIA has just changed its licence to prohibit this, so anyone using CUDA has to use a natively CUDA-capable machine, which means an NVIDIA one.

      • Varyk · 9 months ago

        Thank you, these are really great entry-level answers, so now I can understand what the heck is going on.

    • MxM111@kbin.social · 9 months ago

      It means you can’t take the CUDA drivers, insert a translation layer that converts the calls meant for NVIDIA hardware into calls to non-NVIDIA hardware, and thereby use non-NVIDIA hardware with CUDA.

      • Darkrai@kbin.social · 9 months ago

        Do you think this is something the EU will say is anti-competitive or something? I don’t think current late-stage capitalism America will do anything.

        • 520@kbin.social · 9 months ago

          Oh, the EU will definitely call this anticompetitive, especially when Nvidia has a monopoly in the AI segment as it is.