Virtualisation Build for Remote Desktop (Work, Play, Uni and JupyterLab environment)

With a more permanent home office setup, I am planning to modify my Threadripper system a little bit and turn it into a virtualisation host.

The idea is to run a hypervisor (KVM or Xen) on the Threadripper system, add a second GPU and pass each GPU through to its own VM (one as my GPU-enabled Jupyter environment, one for work and one for play :wink:). That way I can enjoy a bit of gaming whilst a model is training or whenever I feel like it, but still have a dedicated GPU for longer, non-time-critical training tasks. If I need more power, I can still shut down all of those VMs and launch one with all three GPUs passed through (at least, that's the idea).
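For the passthrough half of this, a common first step is to reserve the GPUs for vfio-pci at boot via the kernel command line. A minimal sketch, assuming an AMD platform; the PCI IDs below are placeholders, not real cards (find yours with lspci -nn):

    # appended to the kernel command line (e.g. via GRUB_CMDLINE_LINUX);
    # the vendor:device IDs here are examples only
    amd_iommu=on iommu=pt vfio-pci.ids=10de:1e04,10de:10f7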

I then want to build a second machine with a bigger emphasis on power efficiency that runs one VM for all my university-related stuff and one to remote into any of the other three VMs (basically just a "thin client VM" that uses something like a passed-through iGPU, as long as it can drive 3 displays).

Has anyone done anything like this before (any recommendations on hypervisors and/or ideas on how to set the thin client VM up), or might this even be a silly idea (if so, why)?

I'd just really like to separate my work, play and study systems without the hassle of having to reboot (which I sometimes can't do when I have stuff running and am waiting for it to complete). It also enables me to easily run a different distro for each task: I am planning on openSUSE with KDE for my work VM, Pop!_OS with GNOME for play, elementary OS for uni, and I'm not sure yet for my JupyterLab setup.

If this turns out to be feasible, I'll also update you on the progress, of course.


I am afraid this technology doesn't exist yet.

We need:

A detachable vfio-pci passthrough GPU.

Detach the GPU from the host whilst in the VM, with a remote login into the host (via an icon, or whatever app, but something practical). Then seamlessly detach the GPUs from the VM back to the host whilst having a remote desktop of the VM ready on the host.

Some sort of 'hot-pluggable passthrough' (see the sketch below).
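What exists today is libvirt's generic PCI attach/detach, which is exactly the part GPUs and their drivers tend not to survive. A hypothetical sketch, with a made-up VM name and a placeholder PCI address:

    <!-- gpu.xml: hostdev fragment; the PCI address is a placeholder -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
    </hostdev>

and then, in theory:

    virsh attach-device my-vm gpu.xml --live
    virsh detach-device my-vm gpu.xml --live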

In your theoretical setup, do you want those three OS VMs running at the same time?

Let me understand this better.

1 threadripper,

HOST -> no gpu

GPU1 -> VM1
GPU2 -> VM2

3 monitors

The thin client part is confusing:

A second, less powerful computer.

Thin Client iGPU -> 3 monitors

So wait, the thin client is running a VM with a vfio-pci passed-through iGPU to log into the 3 remote VMs?

If I were you, I would first pull off your first VM GPU passthrough and get a feel for it, and ONLY then move forwards. This is much like a discovery journey where you'll see what fits your needs better along the way.

One thing to keep in mind about GPU passthrough is how to set up the monitors, which is the most cumbersome part.

Choose monitors with several HDMI/DisplayPort inputs for maximum flexibility.

I think I made quite a mess explaining it, but you are getting close to it, yes.

I drew it up to make it easier to follow:

So, in essence, the Threadripper runs three systems; each of them permanently has its own GPU and runs pretty much all the time. Two of those GPUs I already have.

Then I have a second system hosting a 4th VM for all my uni stuff (pretty much only writing, Zoom, Slack and a lot of Firefox tabs, but I like that smooth).

The idea is to then only connect to them remotely over a dedicated 10G network, with some sort of software thin client on a lightweight VM, and only that one has a FirePro V7900 to drive those three monitors permanently. I don't need to see the contents of all those VMs at once, but I need them to be able to run whilst I'm logged into another one, and to be able to switch between them without much delay.

A nice stretch goal is to be able to shut down all three VMs on PC2 and instead launch one VM with all three GPUs to get all those glorious CUDA cores.

Once Ampere Quadros drop, I might even get three identical ones instead. I know I'd need to shut down and reboot to attach an NVLink bridge if I went that way, but it's only a stretch goal, and in those cases where it could be useful, I'd take that.

But that would at least be the plan if I could possibly make that happen.


To elaborate why:

  1. because it would be cool
  2. because I think it would be fun
  3. because right now I reboot to switch contexts, and it is annoying when you need to reboot to do some university-related stuff for two hours, followed by 3h of working in openSUSE, while ideally at the same time training a model in PyTorch on JupyterLab (right now in a Docker container). And maybe take a 1h gaming break or watch YouTube on my Pop!_OS install during a shorter break… that's the dream.

The reason I am moving the Uni VM to PC1 is simply RAM and cores… I'd run out with the officially supported maximum of 128GB on X399.

I am still on this, though for the next 4 weeks I'll mostly focus on finishing all my assignments this semester. The progress I've made so far was realising that I'm in essence building my own VDI setup, and I might separate the client and the hosts completely to make it a little easier.
I know @wendell was talking about a VMware-based Nvidia GRID project and also about using AMD for a FOSS implementation of similar functionality.

I might end up going that route and just shove all my consumer cards into the deep learning system and separate it out, since it is accessed through JupyterLab anyway. As an ultimate goal I'd love to keep everything in London and access it from Bremen and London, so I just have to carry a laptop when switching between the two (I work in London, where I am about to take primary residence again, and study in Bremerhaven, with my German residence being in Bremen), without losing access to my desktops.

I think it can work; just use something like Proxmox.

Using multi-monitor QXL with remote-viewer and SSH, you can accomplish this for a non-gaming VM. First, set up QXL for multi-monitor. In Windows this means adding a second QXL device. Make sure to have enough memory for QXL: you may need to manually set 32MB or 64MB of RAM in the XML, depending on the number of monitors and the resolution. For my work laptop I use dual 32MB devices for the 3200x1800 laptop LCD and a UHD second display on a Windows VM. If it were Linux, I would use a single 64MB device.
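A minimal sketch of the corresponding libvirt domain XML, with example sizes (the values are in KiB, so 65536 = 64MB):

    <!-- one QXL display device; for a Windows guest, multi-monitor
         means adding a second <video> block like this one, while a
         Linux guest can use a single device with heads='2' -->
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
    </video>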

Then learn how to use remote-viewer to connect to the VM on a fixed port. Set a specific port in the XML that is unique for each VM; otherwise one VM might be on port 9000 one day and port 9001 the next.
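Pinning the port comes down to disabling autoport on the graphics element; a sketch with a placeholder port of 9000:

    <graphics type='spice' port='9000' autoport='no'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>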

Then learn how to set up SSH and how to SSH into your machine remotely. You can then forward port 9000 from your SSH server to your remote machine, and on the remote machine run the remote-viewer command against the SPICE URI. Note that they have a Windows client as well, but I have never used it.
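Put together, with a hypothetical hostname and the fixed port from above, it looks something like:

    # on the client: tunnel the VM's fixed SPICE port over SSH
    ssh -N -L 9000:127.0.0.1:9000 user@vm-host &

    # then connect through the tunnel
    remote-viewer spice://127.0.0.1:9000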

This SSH method would be more secure than setting up ports on your router and dealing with all of that mess. SPICE would still only allow local connections to the VM.

For gaming, ignoring VMs for a moment, you could set up OBS streaming, since on Linux at least it appears to work with any application. With a Threadripper you would have plenty of extra CPU for software encoding. From SSH you could launch OBS in streaming mode using a profile (a sketch follows below), or use a remote desktop setup to launch it. Then just connect as a streaming client and away you go, if your internet connection is decent. CAD and RPG/strategy games would be OK, but FPS would probably be a no-no.

Then you could migrate this to a VM, but I personally wouldn't expect GPU passthrough to be perfectly stable remotely. Sometimes you need to reboot the host, and on some graphics cards, if they crash a certain way, the machine needs to be powered down for the fan to work correctly. Sometimes shutting down a VM may crash the host. In that type of environment I would want to set up a second computer with a USB dual relay, to be able to watch the first computer and control the power button and PSU shutoff as well.
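Going back to the launch-OBS-over-SSH step: OBS Studio has command-line switches that make this scriptable. A hypothetical sketch, assuming a pre-made profile named "remote" and the GPU-owning X session on :0:

    # start OBS inside the session that owns the display/GPU
    # and begin streaming immediately
    DISPLAY=:0 obs --profile "remote" --startstreaming &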