One might question why an RX 9070 card would need so much memory, but increased capacity can serve purposes beyond gaming, such as Large Language Model (LLM) support for AI workloads. Additionally, it’s worth noting that RX 9070 cards will use 20 Gbps memory, much slower than the RTX 50 series, which features 28-30 Gbps GDDR7 variants. So, while capacity may increase, bandwidth likely won’t.
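As a rough sketch of that bandwidth gap (the 256-bit bus here is an assumption, not a confirmed spec), peak bandwidth is just bus width times per-pin rate:

```python
# Peak memory bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

# Assumed 256-bit bus at 20 Gbps GDDR6:
print(bandwidth_gb_s(256, 20))  # 640.0
# Same bus width with 28 Gbps GDDR7 (RTX 50-series style):
print(bandwidth_gb_s(256, 28))  # 896.0
```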

  • Artyom@lemm.ee · ↑2 · 3 days ago

    They may be hiding the stats of their unannounced cards, but not hitting 32GB would be a mistake, especially since the 5090 has 32GB. In the world of AI, RAM speed won’t be anywhere near as important as RAM quantity.
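A back-of-the-envelope sketch of why quantity dominates for LLM work, counting weight storage only (the model sizes and quantizations are illustrative):

```python
# Weight-storage-only VRAM estimate; ignores KV cache and runtime overhead,
# so real usage runs noticeably higher.
def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(weights_gb(70, 4))  # 35.0 -> a 70B model at 4-bit already wants >32GB once overhead is added
print(weights_gb(32, 8))  # 32.0 -> a 32B model at 8-bit just about fills a 32GB card
```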

  • Björn Tantau@swg-empire.de · ↑32 ↓1 · 7 days ago

    I have never in my life regretted taking the higher RAM option. Even at the cost of overall performance, more RAM has always been the right choice, especially in the long run. It has greatly increased the longevity of my systems.

    But I do have several systems where I regret having too little RAM. Those are usually systems where I didn’t have any other choice, like the Steam Deck.

    • TheForvalaka@lemmy.dbzer0.com · ↑14 · 7 days ago

      I’m inclined to agree. Lower bandwidth might make some tasks take longer, but you can still accomplish them if you’re patient. When you’re out of RAM, you’re out of RAM.

      • Jessica@discuss.tchncs.de · ↑2 · 6 days ago

        Time to let it page out to a platter hard drive so you can be extra patient while it takes an eternity performing memory swaps at 5400 RPM 😂

  • hendrik@palaver.p3x.de · ↑15 · edited · 7 days ago

    32GB of VRAM at a consumer price would certainly help. I’m a bit concerned about the memory bandwidth; it seems way lower than what modern Nvidia cards offer. But if it’s priced competitively, this could be a good choice for a lot of AI tasks at home, especially LLM inference.

  • MalReynolds@slrpnk.net · ↑12 · edited · 6 days ago

    it’s worth noting that RX 9070 cards will use 20 Gbps memory, much slower than the RTX 50 series, which features 28-30 Gbps GDDR7 variants.

    Seeing as, per the article, there are no 4GB modules, they’ll need to use twice as many chips, which could mean doubling the bus width (one can dream) to 512-bit (à la the 5090), which would make it very tasty. It would be a bold move and would win them some of the market share they so badly need.
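For context, a sketch of how chip count maps to bus width (the 32-bit-per-chip interface is standard GDDR6; which route AMD would actually take is pure speculation), since clamshell mode adds capacity without widening the bus:

```python
# GDDR6 chips expose a 32-bit interface each; in clamshell mode two chips
# share one channel, so capacity doubles while bus width stays put.
def bus_width_bits(num_chips: int, clamshell: bool = False) -> int:
    channels = num_chips // 2 if clamshell else num_chips
    return channels * 32

print(bus_width_bits(8))                   # 256 -> 8 x 2GB = 16GB
print(bus_width_bits(16))                  # 512 -> the dream: double chips, double bus
print(bus_width_bits(16, clamshell=True))  # 256 -> 32GB, but no bandwidth gain
```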

    • kopasz7 · ↑1 · edited · 6 days ago

      Clamshell design: RX 7900 XTX 24GB | PRO W7900 48GB.

      Same GPU, same 384-bit bus.

      • MalReynolds@slrpnk.net · ↑1 · 6 days ago

        Yeah, the article says as much. If it’s not vaporware, this’ll most likely have a 256-bit bus, which will be a damn shame for inference speed. I’m just saying that if they doubled the bus and sold it for ≤ $1000, they’d eat the 5090 alive, generate a lot of goodwill in the influential local LLM community, and probably get a lot of free ROCm development. It’d be a damn smart move, but how often can you accuse AMD of that?
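A sketch of why the bus width matters so much here (all numbers assumed): single-stream LLM decode is roughly bounded by how fast the weights can be streamed from VRAM:

```python
# Memory-bound decode: each generated token reads (roughly) every weight once,
# so bandwidth / weight bytes gives an upper bound on tokens per second.
def max_tokens_per_s(bandwidth_gb_s: float, weights_gb: float) -> float:
    return bandwidth_gb_s / weights_gb

# ~32GB of weights at 20 Gbps on two hypothetical bus configurations:
print(max_tokens_per_s(640, 32))   # 20.0 tok/s ceiling on a 256-bit bus
print(max_tokens_per_s(1280, 32))  # 40.0 tok/s ceiling on a 512-bit bus
```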

        • kopasz7 · ↑2 · edited · 6 days ago

          they’ll need to use twice as many chips, which could mean doubling the bus width (one can dream) to 512-bit (à la the 5090)

          I misread this part, thinking you implied a bus width increase is necessary.

          For a 512-bit memory bus, AMD would either have to use a 1+8-die setup if they follow the 7900 XTX scheme, or a monolithic behemoth like GB202. The former has higher power draw but lower manufacturing costs, while the latter is more power efficient but more prone to defects, since it’s getting close to the reticle size limit.
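The monolithic-vs-chiplet defect trade-off can be sketched with the classic Poisson yield model (die areas and the defect density are assumed, illustrative figures):

```python
import math

# Classic Poisson yield model: yield = exp(-die_area * defect_density).
def die_yield(area_mm2: float, defects_per_cm2: float) -> float:
    return math.exp(-(area_mm2 / 100.0) * defects_per_cm2)

D = 0.1  # assumed defects per cm^2, purely illustrative
print(round(die_yield(600, D), 3))  # ~0.549 for one big monolithic die
print(round(die_yield(150, D), 3))  # ~0.861 per chiplet; bad ones are discarded cheaply
```

The bigger the die, the faster yield falls off, which is the defect-proneness argument in a nutshell.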

          I’d guess Nvidia will soon have to switch to chiplet-based GPUs. Maybe AMD stopped (for now?) because their whole product stack wasn’t using chiplet-based designs, so they had far less flexibility with allocation and binning than with Ryzen chiplets.

          • Tobberone@lemm.ee · ↑1 · 5 days ago

            Has monolithic vs. chiplet been confirmed for the 9070? A narrow bus width on a much smaller node (compared to the previous I/O die) would mean a lot in terms of surface area available for the stream processors.

  • taladar · ↑12 · 7 days ago

    One thing that might also require more memory in the future is doing both at the same time (keeping LLMs loaded while gaming).

    • massive_bereavement@fedia.io · ↑2 · 7 days ago

      There’s a Skyrim mod that (optionally) uses a local LLM to chat with NPCs. You can also use speech-to-text and TTS to just speak with the NPCs.

      However, there’s too much lag for it to feel like a proper experience.

      • taladar · ↑3 · 7 days ago

        I wasn’t so much thinking about use in games as about other software on the system that keeps running while the game is in use. A game can coordinate its own resource usage, but independent software has a harder time with that.

  • OpticalMoose@discuss.tchncs.de · ↑6 ↓1 · 7 days ago

    I’ve been buying Nvidia cards up until now (my last AMD was the HD 5830), but I’d buy this. I’m not super concerned about power or bandwidth; I just want VRAM.

  • Player2@lemm.ee · ↑4 · 6 days ago

    If there’s a 32GB model, I’m buying it, simple as (unless it costs like $2000).