I’ve been thinking about when the RDNA4 cards will come out.
As MLID mentioned, RDNA 4 will come out around Q3 2024. I don’t think there will be RDNA 3.5 or RDNA 3 refresh cards out next year.
I think RDNA4 will be very similar to RDNA3 apart from small architectural improvements and an updated ray tracing core.
There were rumors in 2022 that AMD had issues with TSMC’s 3nm node and that they would be using 4nm instead. Current rumors don’t say anything about which node RDNA4 will use, but seeing that TSMC’s N3E 3nm node is only just entering production and others will be using it, it makes sense that AMD has to use the 4nm node in 2024. This could let AMD release RDNA 4 before Nvidia launches its new series. So I’m thinking RDNA4 could come out at the end of May and be available in June 2024.
Further, I was looking at how big the RDNA 4 flagship chip could be in mm² and what its performance might be. Take the N31, which is built on the 5nm and 6nm nodes with a combined size of 530mm². An RDNA4 flagship would be a chip of roughly 370-450mm² with 90-96 CUs like the RX 7900 series, but with a 256-bit bus, faster memory since it will use GDDR7, and a rated TDP below 280W. I came to this conclusion because the TSMC 4nm node is a very small improvement in transistor density: just 6% for N4 (N4X or Nvidia’s custom 4N might be a bit more).
Looking at the 4nm node and doing the math, it’s no wonder AMD can’t produce a high-end GPU next year: by my math, a GPU 20-30% more performant than an RX 7900 XTX would have to be bigger than 680mm² and have a TDP of around 410W. That’s what the 4nm node gets you.
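To make that scaling math explicit, here’s a back-of-the-envelope sketch. The 530mm²/355W Navi 31 figures are from this thread; the assumption that performance scales roughly linearly with transistor count and power is my simplification, not AMD data:

```python
# Back-of-the-envelope node scaling, assuming performance scales
# roughly linearly with transistor count and with power.
# Navi 31 figures (530 mm^2, 355 W) are from the post above.
n31_area_mm2 = 530      # combined GCD + MCD area of Navi 31
n31_tdp_w = 355         # RX 7900 XTX board power
perf_target = 1.30      # +30% over the 7900 XTX
density_gain = 1.06     # N5 -> N4 density improvement (~6%)

scaled_area = n31_area_mm2 * perf_target / density_gain  # ~650 mm^2
scaled_tdp = n31_tdp_w * perf_target                     # ~460 W

print(f"area ~{scaled_area:.0f} mm^2, TDP ~{scaled_tdp:.0f} W")
```

With slightly different assumptions (no density credit on area, a small efficiency credit on power) you land right on the ~680mm² and ~410W figures above; either way, the conclusion is the same ballpark.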
But here’s all the good news: the GPU, let’s say it’s called the RX 8800 XT, comes out in the middle of next year with 16GB of VRAM for $600, identical raster performance to the RX 7900 XTX, and somewhat better ray tracing performance.
There are two AMD patents on ray tracing that I read a few months back. The first one, published a year before the first RDNA 2 GPU, describes a ray tracing core. The second one was published in June this year and describes a GPU whose ray tracing core adds a dedicated hardware traversal engine and a specific BVH memory cache. Without going into detail, from what I understand the first patent describes the ray tracing core in RDNA 2 and 3, and the second describes a similar but much improved way of doing ray tracing. I’m hopeful we will see this in RDNA4 (the patent arrived this June, and next June we’d have an RDNA 4 card, so it matches the schedule prior to RDNA2). (https://www.freepatentsonline.com/20230206543.pdf)
If RDNA4 ends up being just a sidegrade, anyone who has a 7900 series card won’t need to care. It’s expected to be a 5700 XT situation again. Good RT would be nice though, but I doubt AMD will run any RT game at 4K120 any time soon.
No, we won’t see an AMD flagship, sadly.
As you already know, AMD won’t compete with Nvidia’s high-end 5000 Blackwell series. Sure, AMD could still release a high-end flagship GPU, like a 6950 XT or 7900 XTX, but I have high doubts tbh; I just don’t see it based on the information already laid out.
If anything, they would likely try to go for 3nm to incorporate AI hardware like Nvidia, perhaps. But if they had the means, they wouldn’t avoid competing with Nvidia’s high-end 5000 series. More realistically, we’ll see a 4-5nm architecture, or at the very least a more efficient 5-6nm on an already solid 530mm².
Safe to assume, though, that we’ll get new GPUs by Q3 or Q4 2024 regardless, because GDDR7 will be released in mid-2024, as officially announced by Micron. Some GPUs have also seen over two years of use since their release, so timing-wise it’s ideal to start a new GPU generation. Pretty sure they won’t bother releasing anything before GDDR7 arrives, however; consumers will hold out until then.
AMD won’t compete with Nvidia’s high-end 5000 Blackwell series
Not with RDNA4, but that doesn’t mean they won’t compete with the 5000 series at all; the RDNA5 high end is apparently coming out in 2025.
MLID is a bullshit mill that spits out a bunch of self-contradictory guesses based on absolutely no evidence, in the hopes one of them will be close to correct.
RDNA 4 with GDDR7 will probably get a nice bump in performance.
RDNA4 will be N4-based, targeting a launch in H2 2024. We could probably see CDNA4 and RDNA4 launching in the same year, and we’ll get some hints about what RDNA5 will look like from CDNA4’s packaging.
RedGamingTech says that for RDNA4, only Navi 44 and Navi 48 are rumored, maxing out at a 192-bit bus and 64 CUs. The 7900 XTX will still be the most powerful GPU.
Stopped reading at “MLID”; the guy just throws whatever info he has at the wall and sometimes it turns out to be true.
One of the fun benefits of the Navi31 design: AMD just needs to adapt the Navi31 main die to the newer fabrication node, plus minor updates and polish. The I/O dies hold the memory controllers, so adapting to GDDR7 is just a matter of redesigning the I/O dies. If the RDNA4 I/O dies support GDDR7 and are compatible with Navi31, making a Navi31 refresh support GDDR7 won’t require too much effort.
Yes, but would this be useful?
It depends on where the bottlenecks lie. Commonly, VRAM bandwidth is such a major bottleneck that graphics engines are designed and tuned to spend excess GPU power to stay within the VRAM bandwidth limits. For an object in VRAM to be rendered, it has to consume VRAM bandwidth every frame. All the extra VRAM capacity that many tout just allows bigger levels to be stored in VRAM without the need for memory management. If you want higher-detail objects in VRAM and want to render them as well, you need bandwidth to accompany the capacity.
Given that ray tracing typically increases VRAM usage by a substantial amount, it would be an interesting and worthwhile experiment to see how much VRAM bandwidth is the bottleneck for RT performance.
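To put a rough number on the bandwidth-per-frame point, here’s a quick sketch. The ~960 GB/s (7900 XTX-class) bandwidth and 60 fps target are my illustrative assumptions, not figures from this thread:

```python
# How much VRAM traffic a single frame can actually use, given a
# fixed memory bandwidth and frame rate (illustrative numbers).
bandwidth_gb_s = 960    # ~RX 7900 XTX VRAM bandwidth, GB/s (assumed)
target_fps = 60         # hypothetical frame-rate target

budget_gb_per_frame = bandwidth_gb_s / target_fps  # 16 GB of traffic/frame

print(f"~{budget_gb_per_frame:.0f} GB of VRAM traffic available per frame")
```

So even a 24GB card can’t stream all of its VRAM through the GPU once per frame at 60 fps; capacity beyond what the bandwidth can feed mostly just caches bigger levels, which matches the point above.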
That would be a good solution if the card were massively ahead of the XTX, 4090 levels at least. And it would likely still be behind in RT.
As MLID mentioned, RDNA 4 will come out around Q3 2024.
Stopped reading right there
RDNA 4, probably:
Q4 2024 - paper “soft” launch
Q1 2025 - general availability…
No high end for now, but that is still questionable… I really hope that this time AMD will use REAL RT cores that are massively better. The rest will be, nah… Blackwell will destroy AMD this time…
There’s no way we’d be getting a June release of RDNA 4 without some sort of announcement or more serious leaks. People are rumoring the RTX 5000 cards as a 2025 release, and yet we hear nothing about AMD when their next release is only 7 months out?
Your analysis of performance versus size is sensible, but there’s absolutely no way we’d see a release so soon.
There is a pretty big difference in the timeline of credible leaks for Nvidia versus AMD over the past few major releases: AMD leaks start significantly closer to the launch date. So the fact that there are no RDNA4 leaks right now doesn’t mean a summer release is too soon.
Obviously, for the same reason, the release date is pure speculation at this point.
We’ve gotten the first RDNA4 “changes” in the Linux drivers.
Yeah, I dunno where OP got June next year. He said around Q3 next year… which is July-September… meaning it could come at the end of September.
RDNA3 entries in Linux were made 6-8 months before the release of the 7900 series. RDNA4 entries were made about a week ago. July would be 7 months from now.
That being said, when the RDNA3 entries were made, they also included the 7800 XT and 7600 XT entries, and those didn’t launch until 12-18 months after the first entry. But I think 12 months from now at the latest makes sense. Another November release date like RDNA2.
Last gen, Lovelace was rumored as late and RDNA3 was rumored early.
Let’s just say I am skeptical
That’s fair to say; I’m making a prediction here and it’s pure speculation.
But the RX 5700 XT released July 7, 2019, the RX 6800 XT Nov 18, 2020, and the RX 7900 XTX Dec 13, 2022. So it would be about a year and a half from the previous series.
It was 1.5 years from the 5000 to the 6000 series, but 2 years from 6000 to 7000. Two years from the 7000 series would be Dec 2024 or later. Announcements also come a few months beforehand; if we were getting a card in June 2024, we’d be hearing about it by now, at least rumors if not official announcements.
An early release would be smart, especially if they don’t plan to compete at the high end in the next series of cards: grab the folks who want to upgrade now, before Nvidia releases its cards in Q1 2025.
Sure, more often than not it was closer to 2 years between series releases.
I agree with you that an early release would be smart if they’re not competing at the high end with RDNA4.
GFX12 was added this week. For comparison:
For RDNA3, GFX11 was added on April 29, 2022 and the cards launched Dec. 13, 2022. RDNA2 (GFX1030) was added on June 16, 2020 and released November 18, 2020.
Not even close. A December 2024 “paper” launch is wishful thinking even…
Some “Super” cards or a “refresh” would be possible, but 5% chance at best…
They already stated there won’t be a flagship this time around. It’s all about the midrange now.
Upping the CUs from 96 to 128 (and the ROPs similarly) would increase the GCD size from ~305 to ~372 mm², based on the die image (and leaving some blank space at the side), and the total to 596 mm². Whether that increases performance enough depends on the RAM bottleneck.
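A quick sketch of that arithmetic, assuming the six MCDs stay at ~37.5 mm² each (which is what makes the 305 mm² GCD sum to the 530 mm² Navi 31 total):

```python
# Die-size estimate for a hypothetical 128-CU RDNA chip, using the
# 305 -> 372 mm^2 GCD figures quoted above.
gcd_now_mm2, gcd_new_mm2 = 305, 372
mcd_mm2 = 37.5                      # per MCD (assumed); 6 of them on Navi 31
cus_now, cus_new = 96, 128

per_cu_mm2 = (gcd_new_mm2 - gcd_now_mm2) / (cus_new - cus_now)  # ~2.1 mm^2
total_mm2 = gcd_new_mm2 + 6 * mcd_mm2                           # ~597 mm^2

print(f"~{per_cu_mm2:.1f} mm^2 per extra CU, total ~{total_mm2:.0f} mm^2")
```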
It’s also worth noting that RDNA 3 apparently didn’t reach its expected clocks, and if AMD managed to solve that problem it would be possible to get extra performance without much, or any, extra die space.
In general, if you’re just aiming to reduce chip size by removing two MCDs, that’s not really much of a cost saving. It won’t make the chip a mid-range chip as rumored.
They could make a much larger die, if it’s really just 300 mm².
They could, but N5/N4 is significantly more costly than N7/N6 (estimated at 1.7x the cost, IIRC). That’s why AMD is trying to keep the size under control by using chiplets.
In my opinion, RDNA3 not reaching its expected core clocks mostly comes down to the TSMC 5nm node underperforming relative to what AMD wanted to achieve. Their target might have been RTX 4090 flagship performance on 5nm plus 6nm at a 355W TDP, but that resulted in high power consumption, and to keep power down, core clocks had to suffer.
The RX 7900 XTX Taichi model boosts roughly 250MHz higher and consumes roughly 50W more than the AMD reference model. (https://www.techpowerup.com/review/asrock-radeon-rx-7900-xtx-taichi/38.html)
I have a Taichi on water, 3.1 GHz clock. What more could you need? ~425W consumption, 2.95 GHz stable game clock…
All that is before I update the BIOS to the Aqua monster… ~550W+ without OC…
Thing is, people can’t seem to get much of a clock boost that translates into performance out of the 7900 series. 3200MHz overclocking tests on AIB cards don’t gain much more than 10-15%, and consume 500W+.
If you look at Time Spy results, going from a 2800-3000MHz game clock to a power-unlimited 3500MHz (which consumes over 500W), you gain a few thousand points.
Time Spy with a random “stockish” result versus the 7900 XTX overclock from here: most of the gains came from the 6.0GHz overclock on the CPU, with the graphics result only being about 12% faster with this overclock.
Unless I’ve missed something, it seems like the 7900 series is limited by more than power.
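The clock-versus-score numbers above can be sanity-checked with a quick ratio. The ~2900 MHz baseline and ~12% gain are the rough figures quoted in this thread, not precise measurements:

```python
# Scaling efficiency of the 7900 XTX overclock: how much of the
# extra core clock actually shows up as graphics score.
clock_stock_mhz, clock_oc_mhz = 2900, 3500   # rough thread numbers
score_gain = 0.12                            # ~12% higher graphics score

clock_gain = clock_oc_mhz / clock_stock_mhz - 1  # ~20.7% more clock
scaling_eff = score_gain / clock_gain            # ~0.58, well below 1.0

print(f"+{clock_gain:.0%} clock -> +{score_gain:.0%} score, "
      f"efficiency {scaling_eff:.2f}")
```

An efficiency well below 1.0 is consistent with something other than core clock, memory bandwidth for instance, holding the card back.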
I don’t see GPUs having standard clocks above 3GHz anytime soon. High core frequencies give a small chip much better performance, so it’s cost-efficient for AMD to build that way, but high core clocks increase power consumption. AMD is weighing chip size against frequency to get the performance.
AMD memes a lot, but I don’t think they’d just pull their pants down and shit right in the faces of everyone who bought a 7900 XTX, that soon after launch.
AMD is known for screwing launch buyers by dropping prices hard when the competition doesn’t let them keep their high prices. Look how hard and fast Zen 4 pricing fell after 13th gen launched. I know someone will say ‘it’s good they reacted to competition’, and that’s true, but AMD knew a $300 6-core non-X3D wouldn’t be competitive, and they didn’t care, letting early adopters overpay.
But I don’t think AMD is going to start pricing at $600; that’s too low a ‘starting’ price for their ‘flagship’. I could see $750-$800, but definitely not as low as $600.
I’m honestly pretty sad there won’t be an 8900 XTX or whatever. I’m not excited to see Nvidia just go apeshit with their prices.
Stopped reading at MLID.