AMD’s RT hardware is intrinsically tied to the texture unit, which was probably a good decision at the start since Nvidia kinda caught them with their pants down and they needed something fast to implement (especially with consoles looming overhead, wouldn’t want the entire generation to lack any form of RT).
Now, though, I think it’s giving them a lot of problems because it’s really not a scalable design. I hope they eventually implement a proper dedicated unit like Nvidia and Intel have.
I am pretty sure they already have something in the pipeline; it's just that it can take half a decade to get from low-level concept to customer sales…
That’s what RDNA4 will introduce.
There's not really anything intrinsically /wrong/ with tying it to the same latency-hiding mechanisms as the texture unit (nothing in the ISA /requires/ it to be implemented in the texture unit; more likely the texture unit already has the biggest read-bandwidth connection to the memory bus, so you may as well piggyback off it). I honestly wouldn't be surprised if the Nvidia units were implemented in a similar place, as they need to be heavily integrated with the shader units while also having a direct fast path to memory reads.
One big difference is that Nvidia's unit can do a whole tree traversal with no shader interaction, while the AMD one does a single node test and expansion and then needs the shader to queue the next level. This makes AMD's implementation great for hardware simplicity, and if there's always a shader scheduled that's doing a good mix of RT and non-RT instructions, it's not really much slower.
But that doesn't really happen in the real world - the BVH lookups are normally concentrated in an RT pass rather than spread across all the shaders in the frame. And that pass tends not to have enough other work to fill the pipeline while waiting on the BVH lookups. If you're just sitting in a tight loop of BVH lookups, the round trip back to the shader just to submit the next lookup breaks whatever pipelining or prefetching you might otherwise be able to do.
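To make that concrete, here's a minimal CPU-side sketch of the shader-driven shape described above (my own illustration, not real shader or ISA code - the node layout, the hw_intersect_node() helper and the toy BVH are all made up, and the real hardware path goes through instructions like image_bvh_intersect_ray). The point is just the control flow: the shader owns the stack and re-issues one hardware node test per iteration, where Nvidia's unit would run the equivalent of the whole loop internally.

    // Illustrative model only -- not RDNA or RTX code. It shows the
    // "shader-driven" traversal shape: a software stack in the shader and
    // one hardware intersection issued (and waited on) per loop iteration.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Ray { float o[3]; float d[3]; };

    struct Node {
        bool leaf;                       // leaf nodes would hold triangles
        std::vector<uint32_t> children;  // child node indices (empty for leaves)
    };

    // Stand-in for a single hardware box/triangle test: given one node,
    // return which children the ray should visit next. (It returns all of
    // them here -- the actual box math isn't the point of the sketch.)
    std::vector<uint32_t> hw_intersect_node(const std::vector<Node>& bvh,
                                            uint32_t idx, const Ray&) {
        return bvh[idx].children;
    }

    // Roughly the RDNA 2/3 shape: the shader has to come back after every
    // node test just to queue the next one, which is the round trip that
    // breaks pipelining in a tight RT loop.
    int count_leaf_visits(const std::vector<Node>& bvh, const Ray& r) {
        int leaves = 0;
        std::vector<uint32_t> stack{0};              // start at the root
        while (!stack.empty()) {
            uint32_t idx = stack.back();
            stack.pop_back();
            if (bvh[idx].leaf) { ++leaves; continue; }
            for (uint32_t c : hw_intersect_node(bvh, idx, r))  // one HW step
                stack.push_back(c);                  // shader queues next level
        }
        return leaves;
    }

    int main() {
        // Tiny example tree: root -> two inner nodes -> four leaves.
        std::vector<Node> bvh = {
            {false, {1, 2}}, {false, {3, 4}}, {false, {5, 6}},
            {true, {}}, {true, {}}, {true, {}}, {true, {}},
        };
        Ray r{{0, 0, 0}, {0, 0, 1}};
        std::printf("leaf nodes visited: %d\n", count_leaf_visits(bvh, r));
        return 0;
    }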
But it might also be more flexible - anything that looks a bit like a BVH might be able to do fun things with the BVH-lookup/ray-triangle-intersection instructions, not just raytracing. There simply doesn't seem to be a glut of use cases for that as-is, though, and unused flexibility is just inefficiency, after all.
I doubt they were surprised at all. Isn't RDNA3 very similar to RDNA2? They could have fixed it there, but they decided on minor improvements instead.
Wasn't RDNA2 designed with Sony and Microsoft having input on its features? I'm sure Sony and MS knew what was coming from Nvidia years in advance. I think Mark Cerny said developers even wanted a 16-core CPU originally, but were talked out of it because of die-area restrictions. Nvidia-like RT hardware on those consoles would probably have cost about as much die area as an extra 8 CPU cores. It all just seems like cost optimization to me.
RDNA3 is up to 60% faster than its RDNA2 equivalent in path tracing.
Isn’t that only because RDNA3 is available with that many more CUs? Afaik the per-unit and per-clock RT performance of RDNA3 is barely ahead of RDNA2.
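Just as a sanity check on what "per-unit and per-clock" would mean in practice, here's a trivial normalization sketch. The FPS values are made-up placeholders (swap in numbers from an actual RT benchmark), the CU counts are the public specs (both the RX 7600 and RX 6600 XT have 32 CUs), and the clocks are only approximate game clocks - ideally you'd use the sustained clocks observed during the same run.

    // Back-of-the-envelope "per CU, per clock" comparison. The fps values are
    // placeholders, NOT measurements; the clocks are approximate game clocks.
    #include <cstdio>

    struct Card {
        const char* name;
        double fps;        // RT benchmark result (placeholder here)
        double cus;        // compute unit count
        double clock_ghz;  // sustained clock during the benchmark
    };

    int main() {
        Card rdna3{"RX 7600",    60.0 /* placeholder */, 32, 2.25};
        Card rdna2{"RX 6600 XT", 50.0 /* placeholder */, 32, 2.36};

        auto per_cu_per_clock = [](const Card& c) {
            return c.fps / (c.cus * c.clock_ghz);
        };

        // A ratio meaningfully above 1.0 would mean RDNA3 really is faster
        // per CU per clock, not just shipping with more CUs at the top end.
        std::printf("RDNA3 vs RDNA2, per CU per clock: %.2fx\n",
                    per_cu_per_clock(rdna3) / per_cu_per_clock(rdna2));
        return 0;
    }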
I misread the 6700 XT specs, thinking it was the 60-CU comparison against the 7800 XT, but the actual comparison is 6800 vs 7800 XT.
That being said:
The 7600 is 20% faster than the 6600 XT in Alan Wake 2 and 10% more in Cyberpunk, at 1080p with the same CU count.
The 7800 XT is 37% faster than the 6800 at 1440p in Alan Wake 2. There's no 6800 result in Cyberpunk, so you have to infer from the 6700 XT: assuming the 6800 is ~20% faster than the 6700 XT (as it is in Alan Wake 2), dividing the 7800 XT's measured lead over the 6700 XT by that 1.20 still leaves a good ~35% gain for the 7800 XT over the 6800 at the same 60 CUs.
https://www.techpowerup.com/review/alan-wake-2-performance-benchmark/7.html
https://www.techpowerup.com/review/cyberpunk-2077-phantom-liberty-benchmark-test-performance-analysis/6.html
Microsoft told Digital Foundry that they had locked the specs of the Xbox Series consoles in 2016 - so even then they knew the console would have an SSD, RT capabilities, etc.
They could already have had the next console in mind.