For me, VFIO with VMs is an awesome way to use the OS I prefer (Fedora) at work. Unfortunately, some of the software I need for design (3D-heavy work) requires Windows. Wine could run all of it with great success (I tested it before), but I use some USB peripherals that do not work with Wine. So VFIO it is. Yes, I use VFIO at work daily.
One important distinction from most (if not all) of the guides: I do not run my VMs as root. I run them as my regular user, which imposes some limitations. As far as I know you still can't use virtiofs to share directories with the host, but for me this is a non-issue; the passed-through devices must be owned by your user (which is taken care of by a udev rule); and I needed to create an alternative to vfio-isolate, which you can find on my gitlab (I can't paste links here - just replace the space with a dot: gitlab com/krokodyl1220/vfio-vm-tools). This simple script handles CPU isolation and gives the VMs great performance.
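As a sketch of the ownership part: a single udev rule is enough to hand the VFIO device nodes to an unprivileged user. The file path, user, and group below are placeholders - adjust them to your own setup:

```
# /etc/udev/rules.d/10-vfio-owner.rules  (example path and names)
# Make VFIO device nodes owned by the regular user that runs the VMs,
# so qemu does not need root to open them.
SUBSYSTEM=="vfio", OWNER="youruser", GROUP="kvm"
```

After adding the rule, reload udev (or reboot) and re-bind the devices so the new ownership takes effect.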
My work PC config is:
- Ryzen 5700G
- Gigabyte Aorus B450 Elite
- 64 GB of RAM
- Nvidia RTX 3050 for the Windows guest (in the primary GPU slot)
- Radeon RX 550 for the Linux guest (via an NVMe → PCIe adapter)
I also pass through two onboard USB controllers, one to Windows and one to Fedora, which gives each VM its own dedicated USB hub. On my motherboard the primary GPU slot is in its own IOMMU group, as is the primary NVMe slot, so these were the sensible choices for connecting the GPUs.
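For anyone wondering how the USB controller part looks: a whole controller is passed exactly like any other PCI device in the libvirt domain XML. The PCI address below is just an example - it has to match one of your own controllers from `lspci` output, and the controller must sit in a clean IOMMU group:

```xml
<!-- libvirt domain XML: pass an entire onboard USB controller to the guest.
     The address is an example; replace it with your controller's PCI address. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

With `managed='yes'`, libvirt detaches the controller from its host driver and binds it to vfio-pci automatically when the VM starts.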
My host OS is openSUSE Leap 15.4, but any relatively stable distro will do (such as Alma, Debian or something similar). In the past I used Arch, but I would not dare to do that today.
I use two VMs daily:
- Windows 10 - for a few pieces of software that need it
- Fedora - for all the rest of the work.
This approach allows me to use a fast-changing distro such as Fedora, enjoying fresh packages without having to worry about updates breaking the GPU passthrough. I also run a few VMs inside my Fedora VM, so nested virtualization works great as well.
For networking I set up two network bridges on the host, each on a separate subnet: one for direct communication between the Linux VM and the Windows VM (the Linux VM acts as an SMB server, hosting the design files accessed by my Windows software), and one between the Linux VM and the host, which lets the Linux VM reach the Windows guest's SPICE server for keyboard and mouse. To view the Windows guest I use Looking Glass from my Linux VM, and I pass my peripherals between the host and the Linux guest using evdev. Thanks to this I can use a single cable per peripheral and can switch very quickly when I need to access the host. One limitation: evdev doesn't handle keyboard indicators, so Caps Lock and other indicator LEDs on regular keyboards do not work.
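A sketch of the evdev part: with a recent libvirt this is a single `<input>` element per device in the domain XML (older setups did the same thing via qemu command-line passthrough). The device path and toggle combo below are examples - use your own stable paths from `/dev/input/by-id/`:

```xml
<!-- libvirt domain XML: forward a physical keyboard to the guest via evdev.
     The device path is an example; pick yours from /dev/input/by-id/. -->
<input type='evdev'>
  <source dev='/dev/input/by-id/usb-My_Keyboard-event-kbd'
          grab='all' grabToggle='ctrl-ctrl' repeat='on'/>
</input>
```

With `grabToggle='ctrl-ctrl'`, pressing both Ctrl keys switches the keyboard (and, via `grab='all'`, any other grabbed inputs) between host and guest, which is what makes the single-cable-per-peripheral workflow possible.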
This setup is rock solid, has saved me from having to fix my systems a few times (if something goes wrong with a guest, just restore a snapshot and you are good to go), and lets me use Windows software while feeling as if I were using Linux.