The Xeon W-3400 series has a ton of PCIe lanes and memory channels. That’s what you’re paying for
That’s a lot of money for a heavily downclocked RX 7600, an SSD slot, and a power supply
The thing about codec support is that you essentially have to add dedicated circuits used purely for encoding and decoding video in that specific codec. Each addition takes up transistors and increases the complexity of the chip.
XMX cores are mostly used for XeSS and other AI inferencing tasks as far as I understand. While it could be feasible to create an AI model that encodes video to very small file sizes, it would likely consume a lot of power in the process. For video encoding at relatively high bitrates, a dedicated ASIC will most likely consume a lot less power.
XeSS is already a worthy competitor/answer to DLSS (in contrast to AMD’s FSR2), so adding XMX cores to accelerate XeSS alone can be worth it. I also suspect Intel GPUs use the XMX cores for raytracing denoising.
There are already encoders and decoders for H.264/H.265/VP9/AV1 on Intel GPUs; these are codec-specific. The article in this post points to Intel increasing the capabilities of the GPU, which usually comes with an increase in encoding/decoding performance and efficiency.
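As a rough illustration (not something from the article), here’s a minimal Python sketch of driving those fixed-function blocks through ffmpeg’s Quick Sync (QSV) encoders; the file names and bitrate are placeholders, and it assumes an ffmpeg build with QSV support plus hardware that exposes the AV1 encoder:

```python
# Hedged sketch: hardware decode + encode via Intel Quick Sync through ffmpeg.
# Assumes ffmpeg built with QSV support and an Intel GPU with AV1 encode
# (Arc or newer iGPUs). "input.mkv"/"output.mkv" and the bitrate are placeholders.
import subprocess

subprocess.run([
    "ffmpeg",
    "-hwaccel", "qsv",      # use the GPU's fixed-function decoder
    "-i", "input.mkv",
    "-c:v", "av1_qsv",      # fixed-function AV1 encoder
    "-b:v", "8M",           # illustrative target bitrate
    "output.mkv",
], check=True)
```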
Does anyone really buy a 13700K and then game on its iGPU?
The 13700K has a considerably smaller GPU than a 1360P actually, so no.
Let’s assume you get a 14900K then, overclock the P-cores to 6 GHz with direct-die cooling, and throw on 8000 MT/s DDR5 with a Z790 Apex Encore for good measure. You’re now somewhere around 30% faster in games than your previous setup, and GPU utilization will occasionally hit 90% instead of 70%
All for the neat sum of roughly 2000 USD
How do you manage to draw this out for over 20 minutes? He even gets some of the basics like Zen 4 FCLK/UCLK sync wrong.
It’s this simple:
UCLK is the clock frequency of the memory controller.
MCLK is the clock frequency of the memory.
FCLK is the clock frequency of the Infinity Fabric interconnect.
On Zen 1, these clock frequencies are always in sync.
On Zen 2 and Zen 3, running UCLK and FCLK at the same frequency reduces memory latency by a significant number of clock cycles. The goal is generally to run at the highest possible UCLK and FCLK (whichever caps out lower is the limit).
On Zen 4, running UCLK and FCLK at the same frequency provides no memory latency reduction. The goal here is to run at the highest possible MCLK and FCLK. UCLK = MCLK/2 is a very small performance deficit, so the tradeoff makes sense even if you only gain 10% MCLK.
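To put illustrative numbers on this (a DDR5-6000 kit and a 2000 MHz FCLK are my own example values, not from the comment above), here’s a minimal Python sketch of the clock relationships:

```python
# Example Zen 4 clock-domain math; the DDR5-6000 kit and 2000 MHz FCLK are
# assumed values for illustration only.
data_rate = 6000        # MT/s for the example DDR5 kit
mclk = data_rate / 2    # DDR transfers twice per clock -> MCLK = 3000 MHz

uclk_1_1 = mclk         # 1:1 mode: memory controller runs at MCLK
uclk_1_2 = mclk / 2     # 1:2 mode: small latency penalty, but allows a higher MCLK

fclk = 2000             # MHz; on Zen 4 the fabric runs asynchronously from UCLK/MCLK

print(f"MCLK {mclk:.0f} MHz | UCLK 1:1 {uclk_1_1:.0f} / 1:2 {uclk_1_2:.0f} MHz | FCLK {fclk} MHz")
```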
If you’re purely gaming, there’s no reason to even consider the 14700K unless it’s bundled for a lower price than a 7800X3D setup.
If you actually do so much multithreaded work that you really notice the benefits of a 14700K over a 7800X3D, you might as well go to a 14900K, because every minute saved helps.
How would you have known that the 290 would perform better in 2016?
As for the price cuts, I remember the 780 Ti launch, and the 780’s launch was effectively a price cut relative to the Titan.
The 780 price cut was the reason for the jet engine cooler on the 290-series in the first place.
B650 doesn’t mean PCIe 5.0 support is non-existent. The requirement for B650E is that both the PCIe x16 slot and NVMe slot are PCIe 5.0-compliant
The 290X wasn’t even close to being as competitive as the also-omitted 4870
Nowhere, like it should be. The launch was a disaster because AMD pushed the clock/power targets extremely high in order to “compete” with the 780 and 780 Ti, with the end result being a jet engine in terms of noise.
It wasn’t until February that MSI and ASUS released better-cooled cards, at which point the damage from reviews and the lack of holiday sales had already accumulated.
In the 7970’s case, AMD had at least the benefit of being 3 months early compared to Nvidia.
Most likely the 1355U due to the single threaded performance advantage, unless you’re actually compiling large projects (in which case you should probably be looking at stronger chips)
This is the beginning of another MLID
What are the P- and E-core specific ratings?
It’s like the Wii U, except with even lower adoption rates
Alder/Raptor Lake is also a stinker at idle power. There’s a reason battery life has regressed since 11th-gen Intel
The lowest thermal throttling temp you can set on a 14900K is 62C, so you will have to settle for that.
AMD allows you to set the boost temperature target to 60C if I remember correctly, and since the V/F curve is a lot flatter on AMD CPUs in general, the performance deficit from doing so is significantly smaller