Options for a local "hosted desktop" server?

I’d like to host a single server in my rack at home such that everybody can have their own virtual desktop instance. Ideally, it would replace mine and the kids’ desktop PCs, and we could all access our instances via our laptops/tablets.

I’d like to get something with virtually no perceivable video latency when on the LAN, and still be useable when off the LAN (provided a sufficiently-fast internet connection).

I just watched the “Supermicro Pizza Box Server with Intel Flex GPU” YouTube video. Would something like that be a viable option?

Welcome to the forum!

This project of yours sounds expensive. How much do you expect to spend, or what is your budget?

This is an old video, and I don’t really know how relevant it still is. Is this something you want?

I don’t think it sounds expensive unless you have a lot of children…

The question really comes down to how the VMs are used. If you want to do machine learning while the kids are gaming (all simultaneously), then yes this would probably become expensive.

If, however, we are talking about maybe one gaming vm and a handful of others for basic “office”-type tasks (or possibly even media consumption), then it should be quite doable.

You are essentially describing the old thin client paradigm where you’d have terminals for the rank and file employees that connect to a mainframe/server hosting everyone’s respective session/state information.

Why would the kids not be able to do what they need to do on their laptops and tablets? Why do you need a remote desktop?

I currently have a Threadripper Pro with the Supermicro M12SWA-TF motherboard and 128GB of RAM that I use as my main workstation. It’s mounted in a rack-mountable 4U chassis with a redundant 2000W PSU. It even has 2x 80mm fans that can be mounted on the back to pull air from the GPUs/PCIe cards.

I’d probably repurpose this box, so I think the only thing I’d need is the GPUs. For GPUs, my kids both have GTX 1650s and I have a Radeon Pro 3100 (don’t laugh at me).

My motivations for wanting to do this are:

  1. My workstation is way underutilized when it’s just me using it
  2. I have a very nice enterprise-grade Eaton UPS in my rack
  3. Data backup/disaster recovery through snapshots of the VMs
  4. Free up some desk space
  5. Seems like a fun project

The kids both do some 1080p gaming, and photo editing (Lightroom/Photoshop). One of the kids is starting to dabble in video editing (DaVinci Resolve).

My wife and I just do basic “office stuff,” though I built this Threadripper Pro box to function as a homelab in a single box to tinker with different OSes in different VMs.

Do you need to be able to dynamically assign the GPUs to VMs?
If each GPU is strictly passed through to one VM, this is an easy feat in Proxmox.
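
If you go the passthrough route, the first sanity check is IOMMU grouping: each GPU you want to hand to a VM should sit in its own group. A minimal diagnostic sketch, assuming IOMMU is enabled in the BIOS and on the kernel command line (`amd_iommu=on` for a Threadripper board):

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices in it. A GPU is cleanly
# passable-through when its group contains only the GPU itself
# (plus its HDMI audio function).
list_iommu_groups() {
    if [ -z "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
        echo "No IOMMU groups found - enable AMD-Vi/IOMMU first."
    else
        for group in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${group##*/}:"
            for dev in "$group"/devices/*; do
                # -nns prints the device name with its vendor:device IDs
                lspci -nns "${dev##*/}"
            done
        done
    fi
}

list_iommu_groups
```

On a WRX80-class board like the M12SWA-TF the x16 slots hang directly off the CPU, so the GPUs should generally land in their own groups; if not, slot choice or BIOS ACS settings are the usual knobs to try.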

I agree with felix. If you are planning on just passing through a GPU to each VM, all you need to do is buy the GPUs and maybe an HDMI/DP dummy-display plug, then RDP or maybe Moonlight/Sunshine into the VMs. For quick-and-dirty in-browser connectivity, Apache Guacamole is also an option.

If you want dynamic GPU allocation, that’d be a much more painful project. You can check out Craft Computing’s playlist on the self-hosted cloud gaming server. You need specific Nvidia GPUs and some hacky drivers, and performance will likely not be on par with a mid-range GPU simply passed through to each VM.
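
For reference, the per-VM side of plain passthrough in Proxmox is only a few lines. A hypothetical fragment of `/etc/pve/qemu-server/<vmid>.conf` (the PCI address below is made up for illustration; find yours with `lspci`):

```
# Illustrative fragment of a Proxmox VM config -- not a complete file.
# UEFI firmware and the q35 machine type play nicer with GPU passthrough:
bios: ovmf
machine: q35
cpu: host
# Pass the whole GPU (including its HDMI audio function) as the primary
# display. 0000:41:00 is a hypothetical address.
hostpci0: 0000:41:00,pcie=1,x-vga=1
```

In practice you’d set this through the GUI (Hardware → Add → PCI Device), which writes the same `hostpci` line for you.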

It’s funny how this came out just a day after I posted this.

Still doesn’t change the difficulty of setting up GPU partitioning.

It sounds like the kids have desktops (which probably have GPUs) that could be harvested. I think this is very doable if we suspend the notion of partitioning a single GPU.

This is basically the setup I’m planning for my next desktop, but my case is simpler since I’m the only one using any of the VMs, and I only expect to use one at a time.

Oh, I misunderstood that.

If the GPUs are desktop hardware and not a laptop-soldered GTX 1650 and Pro 3100, then by all means, slapping them inside the Threadripper and passing one through to each VM makes way more sense than GPU partitioning.

Then the only snag is ensuring the motherboard has adequate PCIe slots for all three to run relatively unfettered…

Well, except for the limitation of running out of PCIe slots…

On threadripper? What are you doing on it? Or how many children do you have, to run out of PCI-E? Besides, there’s PCI-E bifurcation, which should be an option (until you run out of space in the chassis).

I don’t know how bifurcation would work from a mounting perspective without using some super glue or duct tape…

I’ve seen riser cards for bifurcating slots, but they usually mount the cards perpendicular to the motherboard’s PCIe slot, which, when used with GPUs, would mean the GPUs block other PCIe slots.

Your motherboard seems to have six PCIe x16 slots. Isn’t that sufficient for two 1650s and your card?

With that beast of a machine I’d get a couple of used RTX 3090s with an NVLink bridge to link them up; shouldn’t be more than $1700 total.
Ditch that Proxmox nonsense and go multiseat, with Aster for example.

25ft+ HDMI cables are cheap enough, and so are USB hubs and 25ft+ USB cables for keyboards/mice. If that’s a no-go logistically, look at HDMI-over-Ethernet extenders. RDP “thin clients” connecting to VMs in Proxmox or whatnot make sense for remote desktop service providers selling to customers who want isolation (because of security/reliability/etc.); your kids won’t give a *** about that, they just want a gaming rig. The complexity and performance overheads are massive: thin clients are still computers, and all that RDP/Moonlight/Proxmox/passthrough business will be a whole lot more of a PITA than a few USB hubs with some cables and an Aster setup.