Dell reportedly restricts exports of AMD's fastest gaming GPUs to China — Radeon RX 7900 XTX, RX 7900, Pro W7900 purportedly listed as sanctioned tech (www.tomshardware.com)
Posted by imaginary_num6er@alien.top to Hardware@hardware.watch · 1 year ago · 50 comments
From-UoM · 1 year ago:
AMD is better at FP32 and FP64. Around 2017, Nvidia and AMD took their data-center cards in different directions: AMD went all in on compute with FP32 and FP64, while Nvidia went all in on AI with Tensor cores and FP16 performance. AMD ended up faster than Nvidia in some tasks, but Nvidia's bet on AI is the clear winner.
ResponsibleJudge3172 · 1 year ago:
Not FP32: the MI300 has 48 TFLOPS, the H100 has 60 TFLOPS.
https://www.topcpu.net/en/cpu/radeon-instinct-mi300
https://www.nvidia.com/en-us/data-center/h100/#:~:text=H100 triples the floating-point,of FP64 computing for HPC.
AMD's FP64 still gaps Nvidia, who in turn gaps AMD at FP16.
From-UoM · 1 year ago:
Nobody knows the actual FLOPS of the MI300. The MI250X had 95.7 TFLOPS of FP32 thanks to its matrix cores:
https://www.amd.com/en/products/server-accelerators/instinct-mi250x
That's more than the H100, even.
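The MI250X figure above can be sanity-checked with the usual back-of-the-envelope formula: peak FLOPS ≈ execution units × FLOPs per clock × clock rate. A minimal sketch, assuming the commonly published MI250X specs (14,080 stream processors, ~1.7 GHz boost, 2 FLOPs per FMA, and matrix cores doubling the FP32 rate):

```python
# Rough theoretical peak throughput: units * flops_per_clock * clock (GHz) -> TFLOPS.
def peak_tflops(units: int, flops_per_clock: int, clock_ghz: float) -> float:
    return units * flops_per_clock * clock_ghz / 1e3

# Assumed MI250X specs: 14,080 stream processors at ~1.7 GHz boost.
vector_fp32 = peak_tflops(14_080, 2, 1.7)  # 2 FLOPs per FMA (multiply + add)
matrix_fp32 = vector_fp32 * 2              # matrix cores double the FP32 rate

print(f"vector FP32 ~ {vector_fp32:.1f} TFLOPS")  # ~ 47.9
print(f"matrix FP32 ~ {matrix_fp32:.1f} TFLOPS")  # ~ 95.7
```

The doubled matrix-FP32 path is exactly why the 95.7 TFLOPS number looks so large next to the H100's vector FP32: it only applies to matrix (FMA-heavy) workloads that the matrix cores can accelerate.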