So, I’m trying to set up a Manjaro host and a Windows 10 guest with GPU passthrough, but I’m feeling kind of lost.
I have a working Windows 10 dual boot that I also put in a VM on my host. So far so good: I can boot into Windows directly, or boot into Linux and then start the VM, which boots into Windows.
I have two GPUs: an NVIDIA RTX 2080 and my integrated GPU (Intel HD 630).
All the guides talk about isolating the graphics card and binding it to vfio-pci on boot, but nobody talks about how to switch between the cards when I turn on the VM, so I can enjoy playing on Linux with the 2080 as well as gaming in the Windows VM with the 2080, without needing to reboot for the 2080 to bind to the host again…
I want to boot into Linux with the 2080, and when I need to game on Windows, turn on the VM with the 2080 while Linux switches to the HD 630 (the monitor is already connected to both the 2080 and the HD 630).
Can someone point me to a guide on the matter?
Something of a hot switch, if you will…
You can shut down the VM and unbind vfio-pci, on NVIDIA cards only.
You must, however, write scripts to do this.
Not only that, but it’s very buggy, because you need to do a Bumblebee sort of thing with the GPU in order to get “seamless” rebinding of the GPU to Linux.
I might experiment with this in a few days, I just don’t have the time to do that right now.
Both guides contain scripts to detach the NVIDIA GPU from its driver and attach it to vfio-pci.
It seems both of them, on the fly:

- kill the display manager
- kill the console
- detach the GPU from its driver
- attach the vfio-pci driver
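The sequence above can be sketched as a shell script. This is a minimal sketch, not either guide’s actual code: the PCI addresses, the vtconsole numbers, and the `display-manager` unit name are all placeholders to adapt (check `lspci` and `ls /sys/class/vtconsole` on your machine).

```shell
#!/usr/bin/env bash
# Sketch of a "start" script: free the GPU from the host and hand it to
# vfio-pci. PCI addresses, vtconsole numbers, and the service name are
# placeholders, not values from either guide.

GPU=0000:01:00.0        # hypothetical address of the 2080
GPU_AUDIO=0000:01:00.1  # its HDMI audio function

detach_gpu() {
  # 1. Kill the display manager so nothing holds the GPU.
  systemctl stop display-manager

  # 2. Kill the consoles and the EFI framebuffer bound to the card.
  echo 0 > /sys/class/vtconsole/vtcon0/bind
  echo 0 > /sys/class/vtconsole/vtcon1/bind
  echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

  # 3. Detach the GPU from its driver (unload the NVIDIA stack).
  modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

  # 4. Attach vfio-pci to both functions of the card.
  modprobe vfio-pci
  for dev in "$GPU" "$GPU_AUDIO"; do
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev"   > /sys/bus/pci/drivers_probe
  done
}

# Only act when explicitly asked, so sourcing this sketch is harmless:
if [[ "${1:-}" == "--run" ]]; then
  detach_gpu
fi
```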
But remember: you can always just reboot your computer to bind/unbind the vfio-pci driver, as described in Archwiki - PCI_passthrough_via_OVMF#Isolating_the_GPU.
Just uncomment/comment the kernel parameter “vfio-pci.ids=xxx”.
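For example, in `/etc/default/grub` (the IDs below are example vendor:device pairs for a 2080 and its audio function, not necessarily yours; get the real ones from `lspci -nn`):

```shell
# /etc/default/grub -- static isolation: vfio-pci claims the card at boot.
# 10de:1e87,10de:10f8 are example IDs; substitute your own from lspci -nn.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=10de:1e87,10de:10f8"
```

Remove the `vfio-pci.ids=` part and regenerate grub.cfg (`sudo grub-mkconfig -o /boot/grub/grub.cfg`) to give the card back to Linux on the next boot.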
Yeah, those are the same links you gave me last time, but they don’t do a hot swap either; you need to shut down the guest VM and reboot the host to use the 2080 in Linux again…
Yes, you would need to shut down the guest VM to attach your 2080 back to Linux. But as far as I can tell, both guides’ scripts do exactly that without rebooting: they load the vfio-pci driver when launching the VM, then unload it and load the NVIDIA drivers back when stopping the guest VM.
Joeknock90 does all of this using libvirt hooks provided by PassthroughPOST/VFIO-Tools.
From VFIO-tools:
or bind a device to vfio-pci without needing to reboot
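The hook helper works by running any executable it finds under a per-VM directory tree. Roughly, the layout looks like this (“win10” is a hypothetical VM name, and the script names are my own; only the directory structure matters):

```shell
# Layout used by the PassthroughPOST/VFIO-Tools qemu hook helper.
# "win10" must match the libvirt domain name exactly.
/etc/libvirt/hooks/qemu                                 # the dispatcher itself
/etc/libvirt/hooks/qemu.d/win10/prepare/begin/start.sh  # runs before the VM starts
/etc/libvirt/hooks/qemu.d/win10/release/end/stop.sh     # runs after the VM stops
```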
YuriAlek seems to do the vfio-pci driver binding using virsh and the commands nodedev-detach and nodedev-reattach.
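That approach boils down to a pair of calls like these. The node-device name `pci_0000_01_00_0` is an assumption derived from a typical PCI address; list the real names on your system with `virsh nodedev-list --cap pci`.

```shell
#!/usr/bin/env bash
# virsh names PCI devices pci_<domain>_<bus>_<slot>_<function>.
# pci_0000_01_00_0 / pci_0000_01_00_1 are placeholders for the 2080
# and its HDMI audio function.

detach_2080() {
  virsh nodedev-detach pci_0000_01_00_0   # unbinds nvidia, binds vfio-pci
  virsh nodedev-detach pci_0000_01_00_1
}

reattach_2080() {
  virsh nodedev-reattach pci_0000_01_00_1
  virsh nodedev-reattach pci_0000_01_00_0 # hands the card back to the host driver
}
```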
From Joeknock90:
When running the VM, the scripts should now automatically stop your display manager, unbind your GPU from all drivers currently using it, and pass control over to libvirt. Libvirt handles binding the card to VFIO-PCI automatically.
When the VM is stopped, libvirt will also handle removing the card from VFIO-PCI. The stop script will then rebind the card to NVIDIA and SHOULD rebind your vtconsoles and EFI framebuffer.