A PC powered by a GPU?

I have no idea where else to publish this idea, but it has been itching at my mind for a while: a purpose-built GPU enclosure that is nothing more than a low-powered SoC running a Linux session to bootstrap a VM. A small slice of the GPU would emulate the computer itself, while the rest does… well, GPU things, with the VM running a graphics hypervisor to send video out Looking Glass-style.

With recent news about the Nvidia RTX 4090 Ti, we may well be returning, in a rather roundabout way, to the days of slot-loaded CPUs a la the Intel Itanium: the sheer power draw these things require, and the fact that they could literally be used for hardware emulation at this rate, means they are almost a mid-range PC entirely on their own.

Am I crazy for thinking this? Or is this the next new thing we might see from the likes of Minisforum?


It's called the Xeon Phi, and Intel killed it.


My only guess is that, like the GM EV1, it was too soon for its time (and market conditions).

But like… look at the Nvidia RTX 4090 Ti (I would provide an image if only I were allowed to post links). We have outrageous things like that, and you're going to tell me the idea isn't marketable? Someone just has to do it right and make it simple.

It’s a four-slot GPU! Ridiculously obscene.

As others have said, this has already been tried and canceled on the GPU front. Where the idea does look to be actively pursued is on the network side rather than the GPU side: check out Nvidia's BlueField Data Processing Units. These DPUs run Linux…

I don't have any graduate-level ECE, but on the surface, the reason you wouldn't want to emulate the x86 ISA on a GPU is that GPU architectures are highly inefficient at running any kind of branching code at the execution level. I'd be surprised if a 4090 could muster even 0.1% of the performance of a ten-year-old Pentium; all the fetch operations a CPU is expected to do would leave the GPU completely memory-bound.
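To make the branching point concrete: GPUs execute threads in lockstep groups (warps of 32 on Nvidia hardware), and when lanes in a warp disagree on a branch, the hardware runs both paths serially with inactive lanes masked off. The toy model below is just an illustration of that cost structure, not a real GPU simulator:

```python
# Toy model of SIMT branch divergence: when lanes in a warp disagree on
# a branch, the hardware executes BOTH paths back to back with lanes
# masked off, so divergent code pays for every taken path.
WARP_SIZE = 32

def simt_branch_cost(conditions, then_cost, else_cost):
    """Cycles a warp spends on an if/else, given each lane's condition."""
    any_then = any(conditions)          # at least one lane takes 'then'
    any_else = not all(conditions)      # at least one lane takes 'else'
    cost = 0
    if any_then:
        cost += then_cost               # whole warp steps through 'then'
    if any_else:
        cost += else_cost               # ...and then through 'else'
    return cost

# Uniform branch: every lane agrees, so only one path executes.
uniform = simt_branch_cost([True] * WARP_SIZE, then_cost=10, else_cost=10)

# Divergent branch: lanes disagree, so both paths execute serially.
divergent = simt_branch_cost([i % 2 == 0 for i in range(WARP_SIZE)],
                             then_cost=10, else_cost=10)

print(uniform, divergent)  # 10 vs 20: divergence doubles the cost here
```

An emulated CPU's fetch–decode–dispatch loop is essentially one giant unpredictable branch per instruction, which is close to the worst case for this execution model.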

If anyone ever did try this out as a novelty, they'd likely do it in CUDA, which would add its own overhead to the performance… unless, perhaps, some form of x86 tokenization rollup were performed prior to execution as its own layer in the emulation stack?
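If I understand the "rollup" idea, it means decoding the instruction stream once, up front, into a flat executable trace so the hot loop never re-parses anything. Here's a minimal sketch of that layering with a made-up three-instruction toy ISA (real x86 decode is vastly more complex; the ISA and names here are purely illustrative):

```python
# Hypothetical sketch of a pre-decode ("rollup") layer: parse the
# program text once into (op, dst, src) tuples, then execute only the
# pre-decoded trace. The toy ISA below is invented for illustration.
PROGRAM = ["mov r0 5", "mov r1 7", "add r0 r1", "mov r2 r0"]

def decode(text_program):
    """One-time rollup pass: turn text into executable tuples."""
    trace = []
    for line in text_program:
        op, dst, src = line.split()
        trace.append((op, dst, src))
    return trace

def run(trace):
    """Hot loop: dispatches pre-decoded tuples, never re-parses text."""
    regs = {}
    for op, dst, src in trace:
        val = regs[src] if src in regs else int(src)
        if op == "mov":
            regs[dst] = val
        elif op == "add":
            regs[dst] += val
    return regs

print(run(decode(PROGRAM)))  # r0 ends up as 12 after the add
```

Even with the decode hoisted out, the `run` loop is still branch-per-instruction, which is the part the GPU architecture punishes.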

This reminds me of the startup Soft Machines' virtual instruction set computing (VISC) architecture (if you could call it that). It would basically combine multiple threads/cores of a CPU into one really fast single thread; I remember reading somewhere that in the lab they could combine four cores into a single virtual core and get roughly a 3x speedup in single-threaded execution.

If this concept took off, it would be a paradigm shift in computer science: it could mitigate Amdahl's law. I wouldn't hold my breath for it to materialize, though, because this is incredibly complex stuff. Intel acquired Soft Machines back in 2016, and the group is still active inside Intel producing patents.
Who knows, Intel might come out with a processor featuring this reverse hyper-threading and completely disrupt AMD.
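To put numbers on the Amdahl's-law point: the serial fraction of a workload caps total speedup no matter how many cores you add, so accelerating the serial part directly (as a virtual fat core would) raises that ceiling. A quick calculation, using the 3x lab figure quoted above purely as an assumption:

```python
# Amdahl's law with a single-thread accelerator:
#   speedup = 1 / (s/k + (1 - s)/N)
# where s = serial fraction, N = core count, k = single-thread speedup.
def amdahl(s, n_cores, serial_speedup=1.0):
    return 1.0 / (s / serial_speedup + (1.0 - s) / n_cores)

s = 0.10  # assume 10% of the work is inherently serial

plain = amdahl(s, 64)                     # ordinary 64-core scaling
boosted = amdahl(s, 64, serial_speedup=3) # same, with a 3x faster virtual core

print(plain)    # about 8.8x: the 10% serial fraction dominates
print(boosted)  # about 21.1x: shrinking the serial part lifts the ceiling
```

With infinite cores, plain scaling tops out at 1/s = 10x, while a 3x faster serial path raises the limit to 30x, which is why this would matter so much if it worked.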

This could be classified as paravirtualization.

The Xeon Phis were really x86 CPU cores that heavily leveraged special vectorization units to perform more like GPUs… the later ones could even run Windows directly.

The BlueFields are ARM processors hanging off the PCIe bus with a large network connection exposed.

Last I checked, someone had already done a RockPro64 with an Nvidia GPU, though the speed was limited to a single x4 slot. I don't know what actual use that gets you, because most of the useful GPU power assumes a x16 slot. IIRC the most power the motherboard delivers to the GPU through the slot is around 75 W; I'm unsure how much the RockPro64 can supply.

Also, the Jetson Nano from Nvidia still exists.

Get an ITX mobo if you just want something compact.


This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.