Theoretical fully custom Threadripper CPU with Vega iGPU for Workstations?

Thinking about the custom designs of the Xbox One X, PS4 Pro and iMac Pro (all cringe to people in the FOSS community, IKR?), I thought about how only 2 Zen dies are active on Threadripper, and how those would almost certainly fit on the same interposer as, say, a cut-down Vega 32 GPU, with a shared memory controller like on the Xbox One X and PS4 Pro, and 16GB of HBM2 as shared system and graphics memory.

That would be a one-package solution for professional workstations without the need for a discrete GPU, but it would require quite a beefy VRM. I also wonder whether the LGA pins on the TR4 socket could handle a GPU and CPU at the same time. It would forgo any DDR4 support, since the HBM is on the interposer of the CPU itself…
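For a rough sense of why shared HBM2 is tempting here, a back-of-envelope bandwidth comparison (the per-pin rates and stack counts below are my own assumptions based on Vega 64-class HBM2 and quad-channel DDR4-2666, not specs for any real product):

```python
# Back-of-envelope peak memory bandwidth comparison.
# All figures are assumed/typical values, not official specs.

def bandwidth_gbs(bus_width_bits: int, rate_gtps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * transfer rate (GT/s) / 8."""
    return bus_width_bits * rate_gtps / 8

# Two HBM2 stacks, 1024-bit bus each, ~2.0 GT/s per pin (Vega 64-class)
hbm2 = bandwidth_gbs(2 * 1024, 2.0)

# Quad-channel DDR4-2666: 4 x 64-bit channels at 2.666 GT/s
ddr4 = bandwidth_gbs(4 * 64, 2.666)

print(f"2x HBM2 stacks:         ~{hbm2:.0f} GB/s")  # ~512 GB/s
print(f"Quad-channel DDR4-2666: ~{ddr4:.0f} GB/s")  # ~85 GB/s
```

Roughly 6x the bandwidth of quad-channel DDR4, which is the whole appeal of putting CPU and GPU on the same pool.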

It would require something like what happened going from Kaby Lake to Coffee Lake: a pinout change on an otherwise identical socket. But what do you think? Would AMD sell a Threadripper CPU + Vega 32 iGPU workstation part? Or would an OEM like Dell or Acer take advantage of AMD's semi-custom business to fine-tune said solution for a specific platform? (Which is what the consoles all did.)

Would the cost of HBM be so prohibitively expensive that GDDR5X would work better for this iGPU? If that's the case, it might call for a custom BGA package rather than the Threadripper LGA, because GDDR5X DOES NOT come in DIMMs. Or do what the PS3's RSX chip did and put the memory on the same package anyway.

IDK, late-night musings about custom designs and how much you can cram onto a CPU interposer…

Hm… theoretically it would probably be possible. But the question becomes whether there's a real market for it.
I mean, the target market for Threadripper is technically workstations, and those usually use professional-grade GPUs anyway, not an iGPU, right?

Also, I wouldn’t say forgo any DDR4 support, because a) I don’t know if the IMC on TR even supports HBM (I don’t think it does; why would it?), and b) customers would be out of options to expand their RAM (thinking about it, that would make it perfect for Apple…).

Theoretically yes, but usefulness? Eh, idk.

For 3D modeling and rendering that could be a pretty sweet solution, especially if it had some form of switchable graphics support. I could have all the dedicated GPUs slave away at offline rendering while the integrated GPU lets me keep working in another program instance, as in the sketch below.

No to giving up DDRn support & expandability though.
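A rough sketch of how that split works today with discrete cards (the blender invocation and scene file are just placeholders; CUDA_VISIBLE_DEVICES is the standard way to mask which GPUs the CUDA runtime exposes to a process):

```python
# Sketch: pin an offline render job to the dedicated GPUs while one
# device stays free for the desktop / interactive session.
# The render command and scene file below are placeholders.
import os
import subprocess

RENDER_GPUS = "1,2,3"  # dedicated cards; GPU 0 keeps driving the display

env = os.environ.copy()
# Mask which devices the CUDA runtime exposes to the child process
env["CUDA_VISIBLE_DEVICES"] = RENDER_GPUS

# Example: headless Blender render of every frame in a scene
subprocess.run(
    ["blender", "-b", "scene.blend", "-a"],
    env=env,
    check=True,
)
```

With a hypothetical TR + Vega package, the iGPU would simply be the device you leave out of the render pool.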

That’s a workload that makes sense… It would also work for DaVinci Resolve, using the iGPU as the display GPU and a dedicated CUDA or OpenCL GPU for processing.

Mh, I’m not really into those kinds of workloads, but don’t they also use a GPU for encoding/decoding while editing (for previews etc.) and for the modeling, respectively? I thought for CAD programs especially you’d need a professional GPU anyway.

DaVinci works best if you dedicate a GPU to processing and use a different preview GPU. Of course, only the paid Studio version supports this, so if you got the free version for yourself, that won’t work.
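Picking the compute device explicitly is easy enough on the OpenCL side; here’s a minimal pyopencl sketch (the last-device-for-compute policy is purely illustrative, and real apps like Resolve expose this through their own GPU-selection settings):

```python
# Sketch: enumerate OpenCL GPUs and build a compute context on a
# dedicated device, leaving the display GPU alone.
# Requires pyopencl; the selection policy below is illustrative only.
import pyopencl as cl

gpus = [
    dev
    for platform in cl.get_platforms()
    for dev in platform.get_devices(device_type=cl.device_type.GPU)
]

for i, dev in enumerate(gpus):
    print(f"[{i}] {dev.name} - {dev.global_mem_size // 2**30} GB")

# Hypothetical policy: assume device 0 drives the display and the
# last enumerated GPU is the dedicated compute card.
compute_dev = gpus[-1]
ctx = cl.Context(devices=[compute_dev])
queue = cl.CommandQueue(ctx)
print(f"Compute context on: {compute_dev.name}")
```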

I’m thinking 8GB of HBM2 would then just be for the iGPU, and people could still use ECC DDR4 for the CPU, which means there wouldn’t need to be a special IMC.

But imagine if they took the interconnect that was supposed to go to a dummy die and translated it into an interconnect for the iGPU. You could keep all 64 PCIe lanes, and Infinity Fabric would take care of the iGPU interconnect.

For architectural stuff (arguably), not so much. For engineering and scientific scenarios, the precision might be necessary, yes.
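To make the precision point concrete, a quick numpy illustration of how single precision drifts under naive accumulation where double precision barely moves (just a toy demo, not tied to any particular GPU):

```python
# Toy demo: naive sequential accumulation of 0.1 ten million times.
# float32 drifts badly; float64 stays essentially exact.
import numpy as np

n = 10_000_000
vals32 = np.full(n, 0.1, dtype=np.float32)
vals64 = np.full(n, 0.1, dtype=np.float64)

# cumsum accumulates sequentially in the array's own precision,
# so the last element is the naive running-sum result.
naive32 = np.cumsum(vals32)[-1]
naive64 = np.cumsum(vals64)[-1]

print(f"float32: {naive32:,.2f}")   # drifts far from 1,000,000
print(f"float64: {naive64:,.6f}")   # ~1,000,000.000000
```

This is why FP64 throughput matters for engineering and scientific code, while games and most visualization get away with FP32.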
