GPU passthrough on a system with only one GPU

I’m currently waiting on parts for my new machine. It’ll be a Ryzen 9 5900X with an RX 6800 XT on a Gigabyte X570 Aorus Elite WiFi.

I want the host system to be headless, run a hypervisor, and host network-attached storage that all VMs would use to store user files.

For each task I want to do, I’d boot up an appropriate VM which would get the GPU and peripheral devices passed through to it.

The host system would have to be controlled via SSH or VNC from the VMs.

I have a couple of questions about the feasibility of my proposed setup:

  1. Is it difficult to manage several VMs which all get the same hardware passed to them? (I’m assuming they couldn’t be running at the same time)
  2. Can one take a snapshot of the current state of a VM including its memory footprint instead of shutting down and booting the VM each time?

I looked into it a bit. It’s an interesting idea: run a workload like Photoshop in one VM, shut it down, switch to another VM for games, and so on.

Difficult? It depends on whether you’ve done PCIe passthrough before. Once you know how to do it, it’s pretty easy to have multiple VMs use the same passthrough hardware. The issue I’d be concerned about is the reset bug. I don’t know whether the 6800 XT has that bug or not; I think I saw something about it and about getting around the issue.

Looking into that, what you would be doing is enabling hibernation in the VMs themselves. You could tell one VM to hibernate, start up the next VM, and later hibernate that one too. Depending on which guest OSes you want to use, there may be some annoyances enabling hibernation.
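(On a Windows guest, for example, that’s `powercfg /hibernate on` from an admin prompt; the hypervisor also has to expose S4 to the guest, as in the XML snippet below.)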

I found a Reddit post (about 4+ years old) from someone who successfully got it to work with Windows and, I think, Fedora; he said he had to add some lines to his QEMU/KVM XML file.
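I can’t say for sure which lines he meant, but libvirt’s domain XML does have a `<pm>` element that controls whether the guest is allowed to suspend, so my best guess is something like this:

```xml
<!-- In the domain XML (virsh edit <vm>): allow the guest to hibernate.
     Actually enabling hibernate inside the guest OS is a separate step. -->
<pm>
  <suspend-to-disk enabled='yes'/>
  <suspend-to-mem enabled='no'/>
</pm>
```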

The biggest thing from his write-up is that it doesn’t sound 100% reliable, which matches my experience with hibernate on bare-metal systems: sometimes you try to resume from hibernation and the system just boots normally with nothing saved.

So it might be a deal breaker if it sometimes just doesn’t restore the state.

It’s also possible this would use a lot of I/O, writing the RAM to disk and then restoring it for each VM. Depending on how often you switch and how much RAM is being stored, that could cause some wear on an SSD.

Keep in mind I have not tried this myself, but I have had more than one passthrough VM in the past: one Windows 7 (with Office installed) and one Windows 10 (games). I could easily switch back and forth without much trouble.

I am currently using Proxmox as a headless hypervisor and have a VM that I pass an RX 460 through to (making use of vendor-reset to deal with AMD’s reset bug on cards older than the 6000 series). My machine is in the garage, so I run a long fiber DisplayPort cable from the card to the monitor, plus network-capable Extron USB extenders from eBay (most only work over a dedicated line) that run over my fiber network.
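If you do hit the reset bug, using vendor-reset is basically a kernel module plus (on newer kernels) a sysfs knob. A rough sketch of what I mean; the PCI address is an example, check yours with lspci:

```python
#!/usr/bin/env python3
# Rough sketch: make sure vendor-reset handles the card's reset. Run as root.
# The PCI address is a placeholder; find yours with lspci.
import subprocess
from pathlib import Path

GPU = "0000:0b:00.0"  # hypothetical address of the passthrough card

subprocess.run(["modprobe", "vendor-reset"], check=True)

# On newer kernels the reset method must be selected explicitly
method = Path(f"/sys/bus/pci/devices/{GPU}/reset_method")
if method.exists():
    method.write_text("device_specific\n")
```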

It took a bit of problem solving (among other things, I had to disable PCIe power management to stop GPU errors in dmesg and occasional crashes), but otherwise it works great. I haven’t yet truly sat down and tested/optimized things like CPU and memory pinning, or different VM image file formats on different types of storage. I am using it every day just fine for normal office and browser workloads, though.
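If anyone hits the same dmesg errors: I believe the per-device knob is the runtime PM control in sysfs (sketch below, address is an example), while `pcie_aspm=off` on the kernel command line is the blunter instrument:

```python
#!/usr/bin/env python3
# Sketch: keep the passed-through GPU fully powered by disabling runtime
# power management for it. Writing "on" to power/control means "always on".
# PCI address is a placeholder; run as root.
from pathlib import Path

GPU = "0000:0b:00.0"  # hypothetical
Path(f"/sys/bus/pci/devices/{GPU}/power/control").write_text("on")
```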

Take note: even if the machine is in your room, you’ll absolutely want a laptop or other bare-bones machine nearby to access the hypervisor if there’s an issue with the running VM.

For Linux or Windows, if you take good notes it’s all just copy-paste once you’ve done the initial problem solving, so managing them is easy enough.

My system is on 24/7, so I just suspend Firefox and leave the VM on. I’m not sure how well sleeping/hibernating VMs works currently; in the past, I’m told, it wasn’t great.


Off-topic really, but did you try this? https://addons.mozilla.org/en-US/firefox/addon/total-suspender/
I find it really useful with tons of tabs open, and with all tabs suspended there’s no ~3% load from Firefox.

As for the OP’s questions:

No, it’s just copy-paste, like Log said.

You can save VM state (not sure about KVM, but definitely with Xen; it’s a Type 1 hypervisor).
However, with real hardware passed to it, you will have trouble with that hardware’s state. It’s not really a hypervisor limitation; consumer hardware is just not designed to be shared between machines.
You would probably need equipment with SR-IOV to share it between VMs at the same time.

And “at the same time” includes “between stopped/saved VMs” too, because unless you go through a full boot/shutdown cycle, that hardware will probably be left in a random, undefined state.

On another note, a Windows VM boots from SSD/NVMe in less than 10 seconds, so…

This is a good point; I was actually thinking of using a Raspberry Pi for this.

If there is a good way to quickly switch between which VMs I’m running, I will set up a set of physical radio buttons (via the GPIO pins) for VM selection.
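Something like this is what I have in mind, assuming gpiozero on the Pi and passwordless SSH to the hypervisor (pin numbers, host name, and VM names are all placeholders):

```python
#!/usr/bin/env python3
# Rough sketch of the GPIO "radio button" VM selector. Assumes gpiozero on
# the Pi and passwordless SSH to the hypervisor. Pin numbers, host name,
# and VM names are all placeholders.
import subprocess
import time
from signal import pause
from gpiozero import Button

HOST = "hypervisor.local"  # hypothetical
VMS = {17: "arch", 27: "centos7-cad", 22: "win10-cad", 23: "win10-games"}
current = None

def virsh(*args):
    return subprocess.run(["ssh", HOST, "virsh", *args],
                          capture_output=True, text=True)

def wait_until_off(name, timeout=120):
    # Crude poll until the guest reports "shut off", so the GPU gets a
    # clean release before the next VM grabs it
    for _ in range(timeout):
        if virsh("domstate", name).stdout.strip() == "shut off":
            return
        time.sleep(1)

def switch_to(name):
    global current
    if current == name:
        return
    if current:
        virsh("shutdown", current)
        wait_until_off(current)
    virsh("start", name)
    current = name

buttons = [Button(pin) for pin in VMS]  # keep references so callbacks live
for btn, name in zip(buttons, VMS.values()):
    btn.when_pressed = lambda n=name: switch_to(n)
pause()
```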

This is the main problem with my plan. It makes sense that the full boot/shutdown cycle matters, especially for the GPU.

It’s starting to sound like I’ll end up with something that’s just a dual boot setup with a small performance penalty for the hypervisor.

I’m starting to think that I should build a separate box for my NAS and run a multiple-boot setup with UEFI on the bare metal instead.

It would be best if you just said what you’re planning to use this machine for.

Because if this is supposed to be your daily driver/desktop, then I find that just using any Linux desktop for coding/browser/mail etc., with the ability to spawn a Windows gaming VM when needed, is the best solution. The easiest way to do that is to add a cheap, low-end GPU to your build, although it can be done with a single GPU; it’s just a bit harder.

A NAS is really the easiest thing to virtualize, if you can fit your drives in the case you’re using. But since in the setup I mentioned above you already have Linux as a desktop, you can just install Samba and you’re done. No need to virtualize the NAS at all.
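For example, a bare-bones share could look like this (path and user are placeholders):

```ini
# /etc/samba/smb.conf -- minimal example; adjust paths/users to taste
[global]
   workgroup = WORKGROUP
   server role = standalone server

[storage]
   path = /srv/storage
   valid users = youruser
   read only = no
```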

Most people new to virtualization get hung up on over-engineering their solutions. A Linux desktop can probably do 90-99% of the things you need; the other 1-10% can be covered by a Windows VM.

Well, it’s mainly going to be an FEA/CAD workstation. I want to move to open-source solutions as much as possible, but for some things I’ll still need to use commercial software. I’m also going to game on it.

So the environments I want are:

  1. Arch Linux for everyday use and programming
  2. CentOS 7 with ABAQUS and Unigraphics NX
  3. Windows 10 with SolidWorks
  4. Windows 10 with games

Each environment wants the nice GPU when I’m using it.

Fair; if I weren’t stuck waiting for my CPU and GPU, I’d probably just be hacking away at it instead of overthinking it.

Yeah, so I would do Arch/Manjaro as the desktop with KVM, with some old Radeon HD or GTX card you can get for pennies. And I would run the CentOS/Windows VMs on top of that, passing the RX 6800 between them as needed.

And if I needed the RX 6800 in the host OS (Arch), for example for OpenCL/ROCm, I would just rebind it from the vfio driver to amdgpu when needed.
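Rebinding is just a couple of sysfs writes. A rough sketch (the address is an example, and the card’s audio function needs the same treatment):

```python
#!/usr/bin/env python3
# Sketch: move the GPU between vfio-pci and amdgpu. Run as root.
# Usage: rebind-gpu.py vfio-pci|amdgpu  (the address below is a placeholder)
import sys
from pathlib import Path

GPU = "0000:0b:00.0"            # hypothetical address of the RX 6800
target = sys.argv[1]            # "vfio-pci" or "amdgpu"

dev = Path(f"/sys/bus/pci/devices/{GPU}")
driver = dev / "driver"
current = driver.resolve().name if driver.exists() else None

if current == target:
    sys.exit(0)
if current:
    (driver / "unbind").write_text(GPU)           # detach from current driver
(dev / "driver_override").write_text(target)      # pin the next probe to target
Path("/sys/bus/pci/drivers_probe").write_text(GPU)  # trigger re-probe
```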

Monitor(s) with multiple inputs and Barrier for mouse/keyboard are helpful in this setup.

Also, the host OS (Arch) would serve as the NAS for the VMs and other machines on your network.

EDIT: One more option: depending on your CAD processing requirements, you may consider an old Quadro/Radeon Pro and use SR-IOV in the host and the CAD VMs.

That’s an interesting prospect. I could implement that with systemd targets, I think: VFIO would get the GPU in the multi-user target, and amdgpu would get it in the graphical target.

It’s a possibility, but it’s probably easier to do with libvirt/QEMU hooks.
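libvirt runs /etc/libvirt/hooks/qemu (if it exists and is executable) with the guest name and operation as arguments, so the rebind can be automated around VM start/stop. A sketch, assuming a rebind helper like the one above (guest names and helper path are placeholders):

```python
#!/usr/bin/env python3
# /etc/libvirt/hooks/qemu -- sketch of a hook that swaps the GPU's driver
# around passthrough guests. Guest names and helper path are placeholders.
import subprocess
import sys

guest, op = sys.argv[1], sys.argv[2]
PASSTHROUGH_GUESTS = {"win10-games", "win10-cad", "centos7-cad"}

if guest in PASSTHROUGH_GUESTS:
    if op == "prepare":
        # Host is about to hand the card over: bind it to vfio-pci
        subprocess.run(["/usr/local/bin/rebind-gpu.py", "vfio-pci"], check=True)
    elif op == "release":
        # Guest has fully stopped: give the card back to amdgpu
        subprocess.run(["/usr/local/bin/rebind-gpu.py", "amdgpu"], check=True)
```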

This is with a single GPU; with an extra cheap host GPU it would be way easier:


@misiektw, the solution you linked looks pretty nice; I’ll try it out when I get my hardware.