• nicholas_wicks87@alien.topB
    11 months ago

The final answer is Nvidia fucked up and should have used four 8-pins for the 4090, and there's no other answer

    • TaintedSquirrel@alien.topB
      11 months ago

      Reminder, the 3090 FE used two 8-pins and was a 350W card. If they were going to use 8-pins on the 4090, they could have just used 3 at 450W.

      It wouldn’t hit 600W as the current 4090 does, but it would still have more headroom than the 3090 did.
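      The connector math above can be sanity-checked with the usual spec figures (assuming 150W per 8-pin PCIe power connector and 75W from the slot; the 600W figure is the 4090's 12VHPWR budget):

      ```python
      # Rough board-power budgets for the 8-pin connector debate.
      # Assumed spec limits: 150 W per 8-pin PCIe connector, 75 W from the slot.

      SLOT_W = 75        # PCIe slot power limit
      EIGHT_PIN_W = 150  # per 8-pin PCIe power connector

      def budget(n_eight_pins: int) -> int:
          """Total board power available from n 8-pin connectors plus the slot."""
          return n_eight_pins * EIGHT_PIN_W + SLOT_W

      print(budget(2))  # 375 W -- the 3090 FE's layout, for a 350 W card
      print(budget(3))  # 525 W -- three 8-pins, short of the 4090's 600 W
      print(budget(4))  # 675 W -- four 8-pins would cover 600 W with margin
      ```

      So three 8-pins covers a 450W limit comfortably, while matching the full 600W of the current 4090 would indeed take four.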

    • a5ehren@alien.topB
      11 months ago

      Just adding more and more 8-pin connectors is not a sustainable solution.

      The real fix is probably to go back in time and have PCI-e 3.0 revamp the power delivery to allow more than 75W to come from the slot.

      • Joezev98@alien.topB
        11 months ago

        The real fix is probably to go back in time and have PCI-e 3.0 revamp the power delivery to allow more than 75W to come from the slot.

        And where would that PCIe slot pull its power from? That’s right: another connector somewhere on the motherboard. You’d just be moving the issue elsewhere.

        • a5ehren@alien.topB
          11 months ago

          Motherboard has way more room for connectors and power circuits than a GPU.

      • lordofthedrones@alien.topB
        11 months ago

        That would be a whole other level of pain. The 75W limit is safe and doesn’t require a redesign. Having motherboards deliver 200-300W through a slot would be very messy and very expensive.

        • GladiatorUA@alien.topB
          11 months ago

          And probably require a connector similar to EPS12V near the PCIe slot, which would be messy. My “genius” solution would be putting a power connector on the other side of the board, directly behind the PCI-e slot. Probably creates other issues on top of the expected ones.

      • Quatro_Leches@alien.topB
        11 months ago

        They should probably not make GPUs that draw that much power. Clearly they’re going overboard with the chip size, to the point where they’re charging 3x what a flagship card cost a decade ago. If they go overboard like that, you can’t fault the standard; they should come up with their own solution that works.