I’d like to host a single server in my rack at home such that everybody can have their own virtual desktop instance. Ideally, it would replace mine and the kids’ desktop PCs, and we could all access our instances via our laptops/tablets.
I’d like to get something with virtually no perceivable video latency when on the LAN, and that’s still usable when off the LAN (provided a sufficiently fast internet connection).
I just watched the "Supermicro Pizza Box Server with Intel Flex GPU" YouTube video. Would something like that be a viable option?
I don’t think it sounds expensive unless you have a lot of children…
The question really comes down to how the VMs are used. If you want to do machine learning while the kids are gaming (all simultaneously), then yes this would probably become expensive.
If, however, we are talking about maybe one gaming vm and a handful of others for basic “office”-type tasks (or possibly even media consumption), then it should be quite doable.
You are essentially describing the old thin client paradigm where you’d have terminals for the rank and file employees that connect to a mainframe/server hosting everyone’s respective session/state information.
I currently have a Threadripper Pro with the Supermicro M12SWA-TF motherboard and 128GB of RAM that I use as my main workstation. It’s mounted in a rack-mountable 4U chassis with a redundant 2000W PSU. It even has 2x 80mm fans that can be mounted on the back to pull air from the GPUs/PCIe cards.
I’d probably repurpose this box, so I think the only thing I’d need is the GPUs. As for GPUs, my kids both have GTX 1650s and I have a Radeon Pro 3100 (don’t laugh at me).
My motivations for wanting to do this are:
My workstation is being way underutilized when used by just me
I have a very nice enterprise-grade Eaton UPS in my rack
Data backup/disaster recovery through snapshots of the VMs
Free up some desk space
Seems like a fun project
The kids both do some 1080p gaming, and photo editing (Lightroom/Photoshop). One of the kids is starting to dabble in video editing (DaVinci Resolve).
The wife and I just do basic “office stuff,” though I built this Threadripper Pro box to function as a homelab in a single box, to tinker with different OSes in different VMs.
I agree with felix. If you are planning on just passing a GPU through to each VM, all you need to do is buy the GPUs and maybe an HDMI/DP dummy-display plug, then RDP or maybe Moonlight/Sunshine into the VMs. For quick-and-dirty in-browser connectivity, Apache Guacamole is also an option.
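For reference, per-VM passthrough on a Proxmox host roughly looks like this. A hedged sketch, not a full guide: the VM ID (101) and PCI address (0000:0a:00.0) are placeholders, and it assumes IOMMU is already enabled in BIOS and on the kernel command line.

```shell
# Find the GPU's PCI address on the host (grep for "amd" for the Radeon card):
lspci -nn | grep -i nvidia

# Pass that device through to VM 101 (address here is a placeholder);
# x-vga=1 tells Proxmox to treat it as the VM's primary display adapter:
qm set 101 --hostpci0 0000:0a:00.0,pcie=1,x-vga=1

# q35 machine type + OVMF (UEFI) firmware generally behave best with passthrough:
qm set 101 --machine q35 --bios ovmf
```

After that, the guest just sees a real GPU, and the dummy plug (or a virtual display driver) keeps it rendering headlessly for Moonlight/Sunshine.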
If you want dynamic GPU allocation, that’d be a much more painful project. You can check out Craft Computing’s playlist on his self-hosted cloud gaming server. You need specific Nvidia GPUs and some hacky drivers, and performance will likely not be on par with simply passing a mid-range GPU through to each VM.
It sounds like the kids have desktops (which probably have GPUs) that could be harvested. I think this is very doable if we drop the idea of splitting a single GPU between users.
This is basically the setup I’m planning for my next desktop, but my case is simpler since I’m the only one using any of the VMs, and I only expect to use one at a time.
If the GPUs are desktop cards and not laptop-soldered versions of the GTX 1650 and Pro 3100, then by all means, slapping them inside the Threadripper box and passing one through to each VM makes way more sense than GPU partitioning.
On Threadripper? What are you running on it? Or how many children do you have, that you’d run out of PCIe lanes? Besides, there’s PCIe bifurcation, which should be an option (until you run out of space in the chassis).
I don’t know how bifurcation would work from a mounting perspective without using some super glue or duct tape…
I’ve seen riser cards for bifurcating slots, but they usually mount the cards perpendicular to the motherboard’s PCIe slot, which, when used with a GPU, would mean the GPUs block other PCIe slots.
With that beast of a machine, I’d get a couple of used RTX 3090s with an NVLink bridge to SLI ’em up; shouldn’t be more than $1700 total.
Ditch that Proxmox nonsense and go multiseat, with Aster for example.
25ft+ HDMI cables are cheap enough, and so are USB hubs and 25ft+ USB cables for keyboards/mice. If that’s a no-go logistically, look at HDMI-over-ethernet extenders. RDP “thin clients” connecting to VMs in Proxmox or whatnot make sense for remote desktop service providers selling to customers who want isolation (because security/reliability/etc.); your kids won’t give a *** about that, they just want a gaming rig. The complexity and performance overheads are massive: thin clients are still computers, and all that RDP/Moonlight/Proxmox/passthrough business will be a whole lot more of a PITA than a few USB hubs, some cables, and an Aster setup.