Can they make dedicated tensor units, or is that patented?
AMD got a lot of AI-related IP when they made the acquisition of Xilinx. It’s just a matter of them dedicating the die space to it.
The die space is only one part of the puzzle. The other - AMD’s Achilles heel, no less - is software support. Phoenix has XDNA already, but from everything I’ve read it’s a PITA to actually use and rather limited by its currently available driver API, and as a consequence there's barely any ML library/framework support as of now.
“Tensor Units” are just low-precision matrix multiplication units.
They have their own equivalent in the CDNA line of compute products.
They absolutely could bring matrix multiplication units to their consumer cards, they just refuse to do so.
Just like they refuse to officially support consumer cards.
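To make the "low-precision matrix multiplication units" point concrete, here's a toy sketch (Python/NumPy; the tile size and function name are made up for illustration, not any vendor's API). A tensor-core-style op is basically `D = A @ B + C` on a small fixed tile, with fp16 inputs and a wider fp32 accumulator:

```python
import numpy as np

def mma_tile(a_fp16, b_fp16, c_fp32):
    # Inputs stored in fp16, multiply-accumulate widened to fp32 --
    # the same input/accumulator split tensor-style units use.
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

# One 4x4 tile (real hardware fixes the tile shape, e.g. 16x16):
a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = mma_tile(a, b, c)  # every element is 4 * 1.0 + 0.0 = 4.0, in fp32
```

The hardware just does this one fixed-shape operation extremely fast; everything else (tiling big matrices, scheduling) is the software stack's job.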
Of course they could. Intel does on their graphics cards. Apple does on its latest silicon.
The question is: do they have the people who could develop this, are they able and willing to spend the money on the hardware, and are they able and willing to spend the money on the software side of it as well?
Currently, it seems like they looked at it, did the math, and decided to try to get by without the effort. And to a degree that’s doable: FSR2 isn’t as good as DLSS, but it saves them the effort of having AI cores on chip. They did the same with frame generation. Generally, they seem able to be slightly worse for a lot less R&D budget.
Of course, they will never leave nVidia’s shadow this way, and should Intel or nVidia ever manage to offer Microsoft and Sony an APU to power the next generation of consoles, but with more features, their graphics division might be well and truly fucked.
Keep in mind, too: if they haven’t already made these decisions to innovate and invest 4+ years ago, then any solution they come up with is still years away. Chip development is a 5+ year cycle from concept to implementation.
They don’t even need to make dedicated tensor units, since programmable shaders already have the necessary ALU functionality.
The main issue for AMD is their software, not their hardware per se.
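For illustration, the "shaders already have the necessary ALU functionality" claim just means a matrix multiply can be written as plain multiply-accumulate loops — one FMA per inner iteration, which is exactly the op generic shader ALUs expose. A toy sketch (Python, names mine; real shader code would of course run this per-thread on the GPU):

```python
def matmul_scalar(a, b):
    # Matrix multiply using nothing but scalar multiply-accumulates,
    # the way generic shader ALUs would do it -- no dedicated matrix
    # hardware required, just (much) lower throughput per clock.
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            acc = 0.0
            for k in range(m):
                acc += a[i][k] * b[k][j]  # one FMA per iteration
            c[i][j] = acc
    return c

result = matmul_scalar([[1.0, 2.0], [3.0, 4.0]],
                       [[5.0, 6.0], [7.0, 8.0]])
# result == [[19.0, 22.0], [43.0, 50.0]]
```

Functionally identical to what a matrix unit computes; the dedicated hardware only wins on throughput and energy, which is why the software stack matters more than the silicon.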
This. AMD struggles with making drivers that don’t crash or get you VAC banned. They’re going to have to clear that bar before they can really start competing.
Those VAC bans really kind of sum up AMD’s lack of software ability. AMD can’t even ship fluid frames without literally getting you banned.
Stop for a moment and think about this: AMD can’t even catch up to nVidia/Intel, much less be at the forefront.
Really, AMD only exists so nVidia doesn’t charge $2k for a 4090… so, uh, thanks AMD for being a joke of a competitor but saving me $400.
AMD is better than Intel on both the GPU and CPU front, lol. Not sure what you’re on.
Indeed, I think AMD has had solid GPU products over the last decade. I’ve had several AMD GPUs as well as nVidia ones. Just because nVidia has been ahead for the last 3 years doesn’t invalidate AMD. It’s competition, and as long as they offer decent performance for the price, people will buy it. RDNA2/3 were definitely not bad architectures - the main gap at the moment is upscalers and frame generation, but that is also reflected in the prices nVidia sells at.
Lol this sub just has it out for RTG for a few months now. The most ridiculous takes get upvoted.
WTF are you talking about? What Intel GPU is better than AMD’s? No one is buying Intel’s trash video cards. Also, the 7800X3D is the fastest gaming chip.
Nah, the throughput of tensor cores is far too high to compete against.
Well, sure, application-specific IP is always going to be more performant. But in a pinch, shader ALUs can do tensor processing just fine. Without a proper software stack, though, the presence of tensor cores is irrelevant ;-)