If even half of Intel’s claims are true, this could be a big shake-up in the midrange market that has been entirely abandoned by both Nvidia and AMD.
Still weird that there are no BYO-RAM cards. They’d suffer in terms of performance, but… possibly not as much as you’d think. Modern GPU memory access is all about hiding latency. Cache miss? Send the request, switch context, come back later.
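That miss-then-switch pattern is easy to see in a toy model. Here's a rough Python sketch (all names and the latency figure are made up for illustration, not taken from any real GPU): a scheduler issues one memory request per cycle, parks the requesting "warp" until the data comes back, and runs other warps in the meantime.

```python
# Toy model of GPU latency hiding: on a cache miss, the scheduler parks the
# requesting "warp" and runs a different one until the data arrives.
# MISS_LATENCY is an arbitrary made-up number, not a real hardware figure.
from collections import deque

MISS_LATENCY = 4  # cycles a memory request takes in this toy model


def run(num_warps, requests_per_warp):
    """Return total cycles to finish when misses can overlap with other warps' work."""
    # Each ready warp is (warp_id, remaining_requests).
    ready = deque((w, requests_per_warp) for w in range(num_warps))
    waiting = []  # (cycle_when_data_arrives, warp_id, remaining_requests)
    cycle = 0
    while ready or waiting:
        # Wake any warps whose outstanding memory request has completed.
        still_waiting = []
        for ready_cycle, w, rem in waiting:
            if ready_cycle <= cycle:
                ready.append((w, rem))
            else:
                still_waiting.append((ready_cycle, w, rem))
        waiting = still_waiting
        if ready:
            w, rem = ready.popleft()
            rem -= 1  # issue one memory request this cycle...
            if rem > 0:
                # ...then park the warp until the data comes back.
                waiting.append((cycle + MISS_LATENCY, w, rem))
        cycle += 1
    return cycle


# Same total number of requests either way, but with more warps the
# scheduler always has something to do while misses are in flight.
print(run(1, 8), run(4, 2))
```

With one warp, every miss stalls the whole machine; with four warps the same total work finishes in far fewer cycles, because the miss latency is overlapped with other warps' requests. Slower BYO RAM would mostly just raise `MISS_LATENCY`, which this kind of scheduling partially absorbs.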
The general solution is making integrated graphics not suck. Oh sorry, the ten-dollar term is “heterogeneous compute.” Multi-architecture parallelism, the way of the future! And also the Xbox 360.
At the lowest level, GPUs are programmed directly against their physical RAM. Many regions are highly specialized for particular stream processors (shaders etc.), and shuttling data between these specialized physical registers is one of the most efficient ways to do parallel work on the GPU.
It’s really unfortunate, but GPUs are horribly complex at their lowest level, and abstractions over that complexity generally perform something like 80%+ worse.
Hopefully some genius someday figures out how to resolve this, but for now a hardware change means entirely new firmware, and the formerly optimal paths through the GPU would no longer be optimal.