Cake day: October 25th, 2023

  • It totally depends on what chips are outsourced. Intel has been a TSMC customer for many years.

    Intel 4 and Intel 20A are not library complete nodes - they’re optimized specifically for x86 compute tiles. Intel 3 and 18A are the refined, library complete versions of these nodes.

    Lunar Lake combines NPU, iGPU, and Compute on a single tile, so Intel 4 and 20A are not viable nodes. Arrow Lake has Compute on its own tile, so it’s using 20A. Lunar Lake could either be delayed 6-9 months to wait for 18A, or it could launch on TSMC N3. N3 was likely a better choice than Intel 3: either due to capacity constraints (Granite Rapids and Sierra Forest will be launching on Intel 3 in the same year), or due to performance (N3 could simply be better suited for GPU, or it would be too costly to port the Arc iGPU to Intel 3 just for Lunar Lake).

    Intel’s business structure has changed. Their node and design teams no longer work as tightly together as they did in the past, when Intel nodes were highly optimized for Intel’s own designs, and those designs were not portable to other foundries. Intel Foundry now designs standardized nodes that compete for external customers, and Intel’s design side has more flexibility in which fabs it chooses for its now-portable designs. (One recent change is that Intel design teams have to pay for foundry steppings out of their own budget, rather than Foundry eating the cost.)



  • Intel’s competitors to CUDA are oneAPI and SYCL. Intel poses no threat to Nvidia GPUs in the datacenter in the near term, but that doesn’t mean Intel won’t still secure contracts.

    Intel’s biggest threat to Nvidia is against Nvidia’s laptop dGPU volume segment. Arc offers synergies with Intel CPUs, a single vendor for both CPU and GPU for OEMs, and likely bundled discounts for them as well. A renewed focus on improving iGPUs also threatens some of Nvidia’s low-end dGPUs in laptops - customers no longer have to choose between a very poor-performing iGPU and stepping up to a dGPU, and iGPUs will start to become good enough that some customers will simply opt not to buy a low-end mobile dGPU in the coming years.


  • Arc is realistically a bigger threat to AMD than it is to Nvidia. The second half of the 2020s will be AMD and Intel competing over second place for desktop dGPUs.

    For mobile, Arc iGPUs, while obviously not matching dedicated GPUs, can realistically offer good-enough performance for people who just want to do light gaming; stepping up to a low-end dGPU just to make sure Minecraft, Fortnite, etc. can at least run may not be worth the extra cost.

    Either way, I think Intel’s heavy focus on putting Arc in all of their Core Ultra CPUs and on improving the iGPU could be a bigger disruptor than their desktop dGPUs, at least in the near term.



  • The problem with this line of reasoning is that we don’t buy CPUs in the datacenter - we buy full servers from third-party suppliers. Look at a company such as CDW.

    Most server customers aren’t massive hyperscalers that need to maximize compute per rack-U. Xeon servers are plentiful, can be found for less than their CPU MSRPs would suggest, and if we’re looking for a 16-24 core model to spin up a branch office, a lot of the time we don’t even really care whether it’s SPR or Epyc. There are other factors, like “Is my LOB app certified by the vendor to run on Epyc?”, etc.

    A lot of the time when I need to order, many of the Epyc servers may be on backorder or comparably priced, or maybe I specifically need Xeon because I already have a Xeon Hyper-V server and want this server to keep working in the event of a failover (best to keep your VMs on the same platform).

    Hell, in Windows Server, Epyc only gained support for nested virtualization with Server 2022, so it wasn’t even a consideration when we did a big refresh a few years back.