There is some wisdom in not trying to use the x86 ISA for literally everything.
Linus’ video was really quite good. He did a lot of homework there and, it sounds like, got to talk to a lot of cool people. I’m not going to say it’s fundamentally wrong to try to make x86 into a graphics device, but there are some things that are really tough to solve in software while keeping the benefits of dedicated hardware.
Think about hardware H.264 encoding (NVENC, Quick Sync) vs the software x264 encoder.
Thus far I would say that graphics cards have been designed such that they can provide the building blocks for implementing, say, H.265 without directly (inflexibly) baking raw H.265 into hardware. The CUDA “cores” are vectorized, sure, if you want to describe it that way, and built with “common” game operations in mind.
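To make the “vectorized” point concrete, here’s a hypothetical NumPy sketch (mine, not anything from the video) of a sum-of-absolute-differences block match, the kind of regular, data-parallel operation video encoders lean on for motion estimation. The scalar loop is roughly what a general-purpose core does one element at a time; the vectorized version applies one operation across the whole block, which is the shape of work GPU vector lanes are built for:

```python
import numpy as np

def sad_scalar(block_a, block_b):
    # Scalar sum of absolute differences: one element per "instruction",
    # the way a simple general-purpose core would grind through it.
    total = 0
    for i in range(block_a.shape[0]):
        for j in range(block_a.shape[1]):
            total += abs(int(block_a[i, j]) - int(block_b[i, j]))
    return total

def sad_vectorized(block_a, block_b):
    # Vectorized SAD: one operation over the whole 16x16 block at once,
    # analogous to what GPU vector/SIMT hardware does natively.
    diff = block_a.astype(np.int32) - block_b.astype(np.int32)
    return int(np.abs(diff).sum())

# Two random 16x16 "pixel blocks", as a video encoder might compare.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
b = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)

assert sad_scalar(a, b) == sad_vectorized(a, b)
```

Same math either way; the win for the GPU is that the vectorized form maps onto wide hardware without any per-element control flow.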
Even if the compute units could be made more like x86 cores… you’re going to add a lot of latency to the stack, which offsets “gaming” performance. Whether that offset is fatal is another matter, I would concede, but I would say that’s probably why we’ve seen Intel waffle on this thing like crazy.
We are now getting close to hardware ray tracing, which I think swings the pendulum even farther away from “massively parallel vectorized x86” (or really anything that approaches non-RISC/VLIW-type architectures).
Jim Keller might come up with something really neat, though, that is x86 compatible but internally looks nothing like today’s cores. Long pipelines don’t work for games the way they can work for AI and compute tasks…
No, I suspect the “massively parallel vectorized x86” of the future is actually sticking the basic computation circuitry in the RAM itself, so the compute device it’s paired with can work nicely with the workload at hand. Who knows, maybe we’ll see FPGAs on the GPUs of the future lol…