Fedora 31/32 + Win10 dual boot to VFIO/LG setup

Recently went through the process of converting my system from a Fedora 31 + Win10 dual boot setup to a VFIO/LG setup. Did this with an AMD 3900X, an RX 5700, and an R7 370. These are the steps I followed; sharing them to document what I did, and in the hope that they might help someone else do the same or similar.

Links

These steps were pieced together primarily from the following links:

Huge thanks to:

Starting Point:

Had Fedora 31 (KDE spin) booting from an NVMe SSD and Windows 10 booting from a SATA SSD. Was using the UEFI boot menu to switch between OSes.

The end goal was to get Windows 10 running in KVM with a second GPU passed through, and then get Looking Glass working so I don’t need a physical KVM switch.

Updates:

  • upgraded to Fedora 32 on host, sudo dnf system-upgrade download --releasever=32, went smoothly
  • added new modprobe config with: softdep amdgpu pre: vfio vfio-pci and options vfio-pci ids=1002:6811,1002:aab0
  • no longer using /usr/sbin/vfio-pci-override.sh
  • changed formatting, moved a lot of the steps into [detail=...] blocks to make it easier to find portions of the config I am reworking

Running Windows 10 in KVM:

First I needed to get Win 10 running as a VM from the existing SSD.

Installing Virtualization

Installed the virtualization package group and made sure libvirtd was running

dnf install @virtualization
systemctl start libvirtd
systemctl enable libvirtd
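To sanity check the install, libvirt ships a validation tool that reports whether KVM acceleration, IOMMU, etc. are usable (IOMMU warnings are expected until it is enabled in UEFI below):

virt-host-validate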
Enabling SVM and IOMMU in UEFI

Rebooted to UEFI and enabled SVM.
While I was at it, I also enabled IOMMU, as it will be needed later.
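Back in Linux, a quick way to confirm KVM can actually use SVM is to check that the kvm device node exists (it will be missing if SVM is still disabled in UEFI):

ls -l /dev/kvm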

Created a VM using the Win 10 SSD as the boot drive

Booted back into Fedora and identified the Windows 10 drive

ls -l /dev/disk/by-id/
sda: /dev/disk/by-id/ata-CT2000MX500SSD1_1927E210EF0F

Launched virt-manager and selected:

  • “New VM”
  • “Import existing disk image”
  • “Existing storage path”: /dev/disk/by-id/ata-CT2000MX500SSD1_1927E210EF0F
  • “Operating system”: Microsoft Windows 10
  • “Memory”: 16384
  • “CPUs”: 8
  • Customize configuration before installation
  • domain type=“kvm”
  • Firmware: UEFI
  • “Apply” then “Begin Installation”

This won’t actually install anything since “Import existing disk image” was selected

Starting the VM in this state is going to be painfully slow. Will need virtio drivers to resolve this.

Installing virtio drivers
  • Download the appropriate virtio-win-*.iso
  • Add a small virtio drive to VM
  • Attach virtio-win-*.iso as CD-ROM (or mount the iso from within Windows)
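The ISO can also be attached from the command line; a sketch, assuming the VM is named win10 and the ISO was downloaded to /var/lib/libvirt/images (adjust both):

virsh attach-disk win10 /var/lib/libvirt/images/virtio-win.iso sdb --type cdrom --config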

Then boot into the VM (expect it to be slow):

  • Boot the VM
  • Ensure the virtio ISO is mounted
  • Install “qemu-ga”
  • Go to Device Manager and install the virtio drivers
  • Shut down Windows
  • Remove the temporary virtio drive and switch the Windows drive’s bus to virtio

At this point I was able to boot Windows 10 using virt-manager or virsh. CPU virtualization and virtio were working, greatly improving performance. It was actually usable for some simple apps (file browsing and simple web browsing).

NOTE: The first few times I booted, the VM was misconfigured and ended up only using 1 core, so it took a very long time to boot and log in to Windows. Once I got SVM enabled in UEFI and all the virtio drivers installed, I was able to boot with many cores in a few seconds.

SSH Server

Didn’t end up using this, but as a security blanket I made sure SSH was working. If I messed up and lost video output on the host, I didn’t want to be completely stuck.

Enabling SSH Server
dnf install openssh-server
systemctl start sshd
systemctl enable sshd

Preparing GPU for Pass-through in Host

At this point CPU performance in the VM was looking good. But graphics performance was dismal even for some simple data logging apps I use.

Enabling IOMMU

Adjust the grub options to enable IOMMU at boot and force the vfio-pci driver module to load early. In /etc/sysconfig/grub, add the following to GRUB_CMDLINE_LINUX (an example of the resulting line follows the list):

  • amd_iommu=on
  • iommu=pt # may improve performance
  • rd.driver.pre=vfio-pci
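For example, the relevant line in my /etc/sysconfig/grub ended up looking like this (the other options are from the stock Fedora install and will differ on your system):

GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.lvm.lv=fedora_localhost-live/swap rhgb quiet amd_iommu=on iommu=pt rd.driver.pre=vfio-pci"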

Then overwrite grub2-efi.cfg and check that everything applied correctly:

grub2-mkconfig -o /etc/grub2-efi.cfg  # Adjust if not using EFI
grub2-editenv list

saved_entry=875a257d28db4ae3ab6a13a4c5758f10-5.6.13-200.fc31.x86_64
kernelopts=root=/dev/mapper/fedora_localhost--live-root ro resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.lvm.lv=fedora_localhost-live/swap rhgb quiet amd_iommu=on iommu=pt rd.driver.pre=vfio-pci
boot_success=1
boot_indeterminate=0

Rebooted to apply the VFIO changes and installed the second GPU. Once rebooted with the second GPU, found the IOMMU groups of each device of interest.

Checking IOMMU Groups and Device IDs

To check the IOMMU groups of the devices, you can use this script:

#!/bin/bash
# Print every PCI device along with the IOMMU group it belongs to
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}  # extract the group number from the path
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"                # describe the device at this PCI address
done | sort -V

Running this I was able to find my 3 devices of interest:

  • USB Controller

IOMMU Group 24 24:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03)

  • RX 5700 (for host)

IOMMU Group 28 2f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] [1002:731f] (rev c4)
IOMMU Group 29 2f:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio [1002:ab38]

  • R7 370 (for VM)

IOMMU Group 30 30:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Curacao PRO [Radeon R7 370 / R9 270/370 OEM] [1002:6811] (rev 81)
IOMMU Group 30 30:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Oland/Hainan/Cape Verde/Pitcairn HDMI Audio [Radeon HD 7000 Series] [1002:aab0]

Used lspci to double check my devices:

24:00.0 USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03)

2f:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] (rev c4)
2f:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio

30:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Curacao PRO [Radeon R7 370 / R9 270/370 OEM] (rev 81)
30:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Oland/Hainan/Cape Verde/Pitcairn HDMI Audio [Radeon HD 7000 Series]

The device I want to pass through is the R7 370, which has device IDs 1002:6811 and 1002:aab0. In the listing above it sits at 30:00.0 and 30:00.1, but note that PCI addresses can shift when hardware changes; by the time of the later configs below, mine had moved to 2f:00.0 and 2f:00.1 (aka 0000:2f:00.0 and 0000:2f:00.1).

Confirmed IOMMU enabled at boot
dmesg | grep -i -e IOMMU
[    0.000000] Command line: BOOT_IMAGE=(hd2,gpt2)/vmlinuz-5.5.15-200.fc31.x86_64 root=/dev/mapper/fedora_localhost--live-root ro resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.lvm.lv=fedora_localhost-live/swap rhgb quiet amd_iommu=on iommu=pt rd.driver.pre=vfio-pci
[    0.000000] Kernel command line: BOOT_IMAGE=(hd2,gpt2)/vmlinuz-5.5.15-200.fc31.x86_64 root=/dev/mapper/fedora_localhost--live-root ro resume=/dev/mapper/fedora_localhost--live-swap rd.lvm.lv=fedora_localhost-live/root rd.lvm.lv=fedora_localhost-live/swap rhgb quiet amd_iommu=on iommu=pt rd.driver.pre=vfio-pci
[    0.253016] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.636293] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.636355] pci 0000:00:01.0: Adding to iommu group 0
[    0.636375] pci 0000:00:01.1: Adding to iommu group 1
...
[    0.637485] pci 0000:33:00.0: Adding to iommu group 35
[    0.637504] pci 0000:34:00.0: Adding to iommu group 36
[    0.637664] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.638217] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    1.696302] AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <[email protected]>

Originally I used the PCI addresses and a vfio-pci-override.sh script to keep the host kernel from binding a driver to the GPU. Using the PCI address has the advantage that the device is selected based on the PCIe slot, which would matter more if I had identical GPUs. But after some more research I decided this was a bit too janky.

I am now using the device IDs instead of PCI addresses to specify devices to vfio-pci, and have them configured directly in /etc/modprobe.d/vfio.conf.

vfio-pci config (old)

To pass the R7 370 to KVM, create /usr/sbin/vfio-pci-override.sh with the following contents:

#!/bin/sh
PREREQS=""
# PCI addresses of the devices to hand to vfio-pci (R7 370 video + HDMI audio)
DEVS="0000:2f:00.0 0000:2f:00.1"
for DEV in $DEVS; do
        echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci

To trigger the override script, create /etc/modprobe.d/vfio.conf and add:

install vfio-pci /usr/sbin/vfio-pci-override.sh; /sbin/modprobe --ignore-install vfio-pci

options vfio-pci disable_vga=1

Then check if the vfio drivers are in the initramfs

lsinitrd | grep vfio

To add the drivers once

dracut --add-drivers "vfio vfio-pci vfio_iommu_type1" --force

To add them permanently, create /etc/dracut.conf.d/vfio.conf with:

force_drivers+=" vfio vfio-pci vfio_iommu_type1 "
install_items="/usr/sbin/vfio-pci-override.sh /usr/bin/find /usr/bin/dirname"

vfio-pci config (new)

To pass the R7 370 to KVM, create /etc/modprobe.d/vfio.conf and add:

softdep amdgpu pre: vfio vfio-pci

options vfio-pci disable_vga=1

options vfio-pci ids=1002:6811,1002:aab0
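Before rebuilding the initramfs, the IDs can be double checked against the card; lspci can filter by vendor:device and shows which driver is currently bound:

lspci -nnk -d 1002:6811
lspci -nnk -d 1002:aab0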

Then check if the vfio drivers are in the initramfs

lsinitrd | grep vfio

Add the drivers once

dracut --add-drivers "vfio vfio-pci vfio_iommu_type1" --force

To add them permanently, create /etc/dracut.conf.d/vfio.conf with:

force_drivers+=" vfio vfio-pci vfio_iommu_type1 "

NOTE: the surrounding whitespace is needed or dracut will complain

Rebuilt the initramfs and checked it for vfio-pci
dracut --force

Rechecked that the vfio drivers are in the initramfs

lsinitrd | grep vfio

-rw-r--r-- 1 root root 109 May 29 12:35 etc/modprobe.d/vfio.conf
drwxr-xr-x 3 root root 0 May 29 12:35 usr/lib/modules/5.7.6-201.fc32.x86_64/kernel/drivers/vfio
drwxr-xr-x 2 root root 0 May 29 12:35 usr/lib/modules/5.7.6-201.fc32.x86_64/kernel/drivers/vfio/pci
-rw-r--r-- 1 root root 27080 May 29 12:35 usr/lib/modules/5.7.6-201.fc32.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz
-rw-r--r-- 1 root root 13836 May 29 12:35 usr/lib/modules/5.7.6-201.fc32.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
-rw-r--r-- 1 root root 12736 May 29 12:35 usr/lib/modules/5.7.6-201.fc32.x86_64/kernel/drivers/vfio/vfio.ko.xz
-rw-r--r-- 1 root root 3208 May 29 12:35 usr/lib/modules/5.7.6-201.fc32.x86_64/kernel/drivers/vfio/vfio_virqfd.ko.xz

Rebooted and confirmed vfio-pci working
lspci -nnv
            
25:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03) (prog-if 30 [XHCI])
        ...
        Kernel driver in use: xhci_hcd

2f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Curacao PRO [Radeon R7 370 / R9 270/370 OEM] [1002:6811] (rev 81) (prog-if 00 [VGA controller])
        ...
        Kernel driver in use: vfio-pci
        Kernel modules: radeon, amdgpu

2f:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Oland/Hainan/Cape Verde/Pitcairn HDMI Audio [Radeon HD 7000 Series] [1002:aab0]
        ...
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel

GPU Pass-through

Now that the Linux kernel is not loading a display driver for the VM’s GPU (vfio-pci holds it instead), the VM can be connected directly to the device at 2f:00 and load its own driver for the GPU.

Backing up VM xml

Before proceeding, it’s probably a good idea to back up the XML config of the VM:

virsh dumpxml nameofvm >  /somewhere/safe/nameofvm.xml    
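If a later change breaks the VM, the backup can be restored with:

virsh define /somewhere/safe/nameofvm.xml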
Configuring VM for Pass-through

Using virt-manager, edit the Windows 10 VM’s config and remove the superfluous devices from the VM:

  • QXL Video
  • Spice Display
  • Console
  • Channel Spice

Make sure the following devices are present:

  • QXL Video device (with type=none)
  • Spice Display
  • PS/2 Mouse
  • Virtio Keyboard

Add the GPU

  • Add Hardware
  • PCI Host Device
  • Select the appropriate device: 0000:2f:00.0
  • Repeat for the audio device: 0000:2f:00.1
  • Also add a USB controller; repeat for device: 0000:24:00.0
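For reference, virt-manager writes each passed-through device as a hostdev block in the domain XML, roughly like this (address from my system; managed='yes' lets libvirt handle detaching and reattaching the device):

<hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
        <address domain='0x0000' bus='0x2f' slot='0x00' function='0x0'/>
    </source>
</hostdev>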

With a second keyboard and mouse connected to the USB card, and the second GPU connected to another monitor (or another input of an existing monitor), you should be able to launch the VM and control Fedora with the first keyboard and mouse and the Windows 10 VM with the second.

Can also control the VM with the main keyboard and mouse by keeping the cursor over the VM’s window, but it is very finicky.

Looking Glass

Time to ditch the second keyboard, mouse and monitor. And even the extra USB card.

Added Shared Memory File for LG

Need a shared memory file for LG to copy frame buffers to and from. The size of the file needs to be (a shell version of this arithmetic follows the list):

  • bytes_per_frame = width x height x 4 x 2; 2560 x 1440 x 4 x 2 = 29491200 B
  • megabytes_per_frame = bytes_per_frame / 1024 / 1024; 28.125 MB
  • needed_megabytes = ceil_base_2(megabytes_per_frame + 2); 32 MB
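The same arithmetic as a quick shell sketch (the resolution values are just my setup; adjust to the guest’s resolution):

#!/bin/bash
W=2560; H=1440
BYTES=$((W * H * 4 * 2))               # two 32-bit frames
MB=$(( (BYTES + 1048575) / 1048576 ))  # round up to whole MiB
NEED=$((MB + 2))                       # + 2 MiB of headroom for headers
SIZE=1
while [ "$SIZE" -lt "$NEED" ]; do SIZE=$((SIZE * 2)); done  # next power of two
echo "shm size: ${SIZE} MB"            # prints 32 for 2560x1440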

To have tmpfiles.d create the shared memory file at boot, I created /etc/tmpfiles.d/10-looking-glass.conf and added (replace user with the account that runs the client):

#Type Path                      Mode  User    Group  Age  Argument
f     /dev/shm/looking-glass    0660  user  kvm    -

To have systemd-tmpfiles create the file immediately, run
systemd-tmpfiles --create /etc/tmpfiles.d/10-looking-glass.conf

Resolved SELinux Issues

SELinux was preventing qemu from accessing the shm file. To tell SELinux to allow the qemu user to access the vfio shm:

ausearch -c 'qemu-system-x86' --raw | audit2allow -M my-qemusystemx86
semodule -X 300 -i my-qemusystemx86.pp
setsebool -P domain_can_mmap_files 1
Built Looking-Glass on Host
dnf install git make cmake binutils-devel SDL2-devel SDL2_ttf-devel nettle-devel spice-protocol fontconfig-devel libX11-devel egl-wayland-devel wayland-devel mesa-libGLU-devel mesa-libGLES-devel mesa-libGL-devel mesa-libEGL-devel libXfixes-devel
git clone --recursive https://github.com/gnif/LookingGlass.git
cd LookingGlass
git checkout B1  # Building master requires "xi"?
mkdir client/build
cd client/build
cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ../
make
ln -s $(pwd)/looking-glass-client /usr/local/bin/
Configure VM to use shm file

Added this shmem block to the devices section of the VM XML:
$ virsh edit [vmname]

...
<devices>
    ...
    <shmem name='looking-glass'>
        <model type='ivshmem-plain'/>
        <size unit='M'>32</size>
    </shmem>
</devices>
...

Add shared memory driver in Windows guest:

Then install Looking Glass in Windows:

  • Download the matching ‘B1’ release from here
  • To schedule Looking Glass to launch on boot, launch cmd as administrator and run: SCHTASKS /Create /TN "Looking Glass" /SC ONLOGON /RL HIGHEST /TR C:\Users\<YourUserName>\<YourPath>\looking-glass-host.exe
  • Or schedule it using Task Scheduler
  • Start the Looking Glass host by running the exe or by starting the scheduled task in Task Scheduler
    • If looking-glass-host.exe immediately exits, check the resolution of the VM. Mine somehow set itself to 4K and overran the 32 MB shm I had configured.
  • With looking-glass-host.exe running on the guest, launch the Looking Glass client (/usr/local/bin/looking-glass-client) on the host.

Was then able to disconnect the second keyboard, monitor, and mouse, and use Looking Glass instead. I saw people mention they noticed input lag using the host’s keyboard and mouse in their setups. I have not noticed any lag, but I also have not been playing twitchy FPS games on the Windows 10 VM. :man_shrugging:

Notes:

Setting schema domain

I did not need this, but it was suggested in a number of places to modify the domain tag of the VM’s XML file:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
VM CPU Config
  • “copy host CPU configuration”
  • Make sure the topology makes sense (for the 3900X): sockets=1, cores=12, threads=2
  • Set an allocation if you want
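For reference, the resulting cpu block in the domain XML looks roughly like this (a sketch for the 3900X; depending on the virt-manager version, “copy host CPU configuration” may emit host-model instead of host-passthrough):

<cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='12' threads='2'/>
</cpu>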
VM Memory Config
  • Current allocation: 8-16GB (8192-16384)
VM Controller USB Config
  • Model: USB 3
VM RNG Device
  • Host Device: /dev/urandom

So, this is one way to do it.

A better way would be to use modprobe driver prereqs. (I can run you through this when I get to a PC)


All in all, great guide!


Do share, always interested in hearing better ways.

Mostly documenting what I did; it was not really meant to be a “guide”, though I suppose it sort of became that.