On-demand PCIe passthrough

Hi. I was thinking of configuring my main (read: only) PC for PCIe passthrough, but I wasn't sure if what I want is actually possible right now (I only suspect it might be), and this is probably the best place to ask.

So what I want to know is: is it possible to pass a GPU through to the guest system when required, but also have a usable GPU in the system when the VM is down? The reason I'm asking is that I have a very portable desktop PC (NCASE M1) and can't fit a second big card in it (so it's a single-slot RX 550 or a GT 710), and I like to do GPU-intensive work on my host system from time to time; a 1080 Ti would certainly be more useful for that than a GT 710. From the stuff I watched and tried, I realize that the driver in use for the GPU has to be "vfio-pci", however!
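From what I've gathered, the "driver has to be vfio-pci" part is just PCI driver binding, which can in principle be flipped at runtime through sysfs rather than fixed at boot. A minimal sketch of the idea, with made-up device addresses (check yours with lspci -nn):

```
# Made-up addresses for the GPU and its HDMI audio function.
GPU=0000:01:00.0
AUDIO=0000:01:00.1

modprobe vfio-pci
for dev in $GPU $AUDIO; do
    # Release the device from its current driver (nvidia/nouveau)...
    echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
    # ...then let only vfio-pci claim it, and bind it.
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind
done

# Reverse: clear the override and let the normal driver reprobe the device.
for dev in $GPU $AUDIO; do
    echo "$dev" > /sys/bus/pci/drivers/vfio-pci/unbind
    echo > /sys/bus/pci/devices/$dev/driver_override
    echo "$dev" > /sys/bus/pci/drivers_probe
done
```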

At one point I followed this guide (Windows Gaming on Linux: Single GPU Passthrough Guide - YouTube) and it didn't work straight away, so I ended up with a combination of that plus one of these two: Yuri / VFIO · GitLab and GitHub - joeknock90/Single-GPU-Passthrough. I can't remember which, since I read both and acted on whatever I thought was missing from the video guide. That setup basically does the handoff from the nvidia/nouveau driver to vfio-pci, so maaaaaybe it's possible to have both? The only reason I want to switch away from it is that I'd like access to my files and programs on my host system while the VM is running. I do like the single-GPU setup for sandboxing games off my main system, and it's faster than rebooting, and I can get at some stuff via VNC on the Windows guest, but it's too similar to dual-booting, so I didn't want to replicate it.
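For context, those single-GPU guides do the handoff with a libvirt hook script that runs on VM start/stop. A rough sketch of the shape of it (the VM name and PCI addresses are placeholders, and the real guides handle more edge cases, like stopping console VTs):

```
#!/bin/bash
# /etc/libvirt/hooks/qemu -- libvirt calls this as: <guest> <operation> ...
GUEST="$1"; OP="$2"

if [ "$GUEST" = "win10-gaming" ]; then
    case "$OP" in
        prepare)
            # Tear down the host display stack so the driver releases the card.
            systemctl stop display-manager
            modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia
            # Hand both GPU functions (video + HDMI audio) to vfio-pci.
            virsh nodedev-detach pci_0000_01_00_0
            virsh nodedev-detach pci_0000_01_00_1
            modprobe vfio-pci
            ;;
        release)
            # Give the card back to the host and bring the desktop up again.
            virsh nodedev-reattach pci_0000_01_00_0
            virsh nodedev-reattach pci_0000_01_00_1
            modprobe nvidia
            systemctl start display-manager
            ;;
    esac
fi
```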

What would be ideal is to have two usable cards that can both be used on the host system, and only when I feel like playing some games do I temporarily dedicate one of them to that specific purpose. I don't really want the more powerful of the two cards to be dead weight 90% of the time. What also gives me hope is that it looks doable on the laptop side of things: [GUIDE] Optimus laptop dGPU passthrough · GitHub
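As far as I can tell, with two cards the on-demand part should be much simpler than the single-GPU dance, since the host display never has to go down. Something like this, assuming the host runs its desktop on the small card and the 1080 Ti sits at a made-up 0a:00.0:

```
# Pull the 1080 Ti from the host on demand; the desktop stays on the small card.
virsh nodedev-detach pci_0000_0a_00_0   # video function, rebinds to vfio-pci
virsh nodedev-detach pci_0000_0a_00_1   # its HDMI audio function
# ... run the gaming VM ...
virsh nodedev-reattach pci_0000_0a_00_1
virsh nodedev-reattach pci_0000_0a_00_0 # back to nvidia/nouveau on the host
```

The catch, I assume, is that the host driver has to release the card cleanly, which it won't if anything on the host is still rendering on it.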

So I was wondering: has anyone done this? Has anyone tried something similar? Is it even possible? If so, can you maybe drop some articles, forum posts or something to get me started (apart from what I've already found)? Maybe someone actually has a guide (the search terms get odd).

Anyways, thanks for reading this far. Just in case, here is my setup:
Ryzen 2700X
Asus Strix X470-i
16 GB RAM
NVIDIA 1080 Ti
Arch (or I can switch)


Have you considered doing two VMs?
One for gaming, and one for everything else?

Like a dual boot, but keeping the host up?
And just assigning the GPU in both config files?
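Something along these lines in both guest configs would do it. This is just a sketch: the VM names are made up, the address assumes the card at 01:00.0, and managed='yes' makes libvirt handle the vfio-pci rebind whenever either VM starts:

```
# Same hostdev definition, attached persistently to both domains.
cat > gpu.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device win10-gaming gpu.xml --config
virsh attach-device linux-work gpu.xml --config
# The two just can't run at the same time.
```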


Oh, I hadn't thought of that, interesting idea. So if I understand correctly: have a host system that is basically there to do background tasks while gaming, plus two VMs that both have access to the GPU, and when I want to play games I switch to Windows, and when I want to do some hardware-intensive stuff I switch on the other VM with Linux on it? Well, that certainly achieves some of the things I wanted.

The only downsides, as far as I see them, are CPU overhead, since I need to leave some cores to the host system (I've read in some guide that that's not necessary as long as you don't exhaust all the cores, but I haven't tested it myself), and disk space; apart from that it's certainly an option. I've seen a guide (again, I can't remember which; I always do this at night for whatever reason) where a guy had two VMs, on Proxmox I think it was, with GPUs passed to both, and was using Looking Glass to stream from one VM to the other (now I'm starting to think it was Gnif's own video, hehe).
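On the cores point, the approach I've seen in guides is to pin the guest's vCPUs to specific host cores and leave a couple for the host. A sketch with a made-up VM name (the mapping would need to match the 2700X's actual core/thread topology):

```
# Pin 6 guest vCPUs to host CPUs 2-7, keeping 0-1 free for the host.
for vcpu in 0 1 2 3 4 5; do
    virsh vcpupin win10-gaming "$vcpu" "$((vcpu + 2))" --config
done
```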

Though I wanted to ask: is all that's required for a successful passthrough that lspci -v says "Kernel driver in use: vfio-pci" for the graphics card? Because on the laptop I've borrowed for testing, I managed to get to the point where I'd boot with the nvidia drivers and, after starting the VM, it would switch to "vfio-pci" (or that may well be an error?). I didn't manage to get it all the way there (error 21, I think, but it was 5am at that point; I can probably do better), and it wouldn't load the drivers properly after shutdown. If that's all that's required I could certainly try, and if successful I can post a guide here.
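For the record, this is the check I mean, with a made-up PCI address:

```
# -k prints the kernel driver currently bound; -s limits output to one device.
lspci -nnk -s 01:00.0
# While the VM holds the card, it should report:
#   Kernel driver in use: vfio-pci
```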

I was thinking that the host might act like a dumb hypervisor, like Proxmox does, and then you actually use the VMs to do stuff, but it was just an idea. I have no idea about laptops; it sounds tricky with two GPUs going to the same screen/output.

For a laptop I would just dual boot

Yeah, I don't intend to do it on a laptop; I just thought that if I can at least get it to change drivers on the go without crashing the system, it could work :grin:

Actually, I will try the two-VM idea once I get the hardware in. I'm not sure if I can add the GPU (the 1080, when Windows is not up) to the machine (my main Linux system) while it's running, but that would actually be almost ideal if so.
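One thing I'll check first is whether the 1080 Ti ends up in its own IOMMU group, since as far as I understand that's the precondition for detaching it from a running host without dragging other devices along. The usual loop for listing the groups:

```
# List every PCI device by IOMMU group; the GPU's group should contain only
# the GPU's own functions (or devices you're willing to pass along with it).
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done
```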