Fedora 33: Ultimate VFIO Guide for 2020/2021 [WIP]

I think this is more true than you meant. Later on you must have updated grub.cfg, either manually or unintentionally.

Notes:

Use any of the following checks as needed.

Checking whether you booted with the correct options:

cat /proc/cmdline

My output:

BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.9.11-200.fc33.x86_64 root=UUID=aa16dfce-be02-4f48-a32d-ee26ae99a48d ro rd.driver.blacklist=nouveau modprobe.blacklist=nouveau resume=UUID=ee7a7237-a530-4c37-8c62-79b33ddef287 rhgb intel_iommu=on vfio-pci.ids=8086:a2af,10de:1b80,10de:10f0
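If you would rather have a quick pass/fail than eyeball the whole line, a grep along these lines works (my own sketch, not from the guide):

# exit status tells you whether the IOMMU flag made it onto the running kernel
grep -qE 'intel_iommu=on|amd_iommu=on' /proc/cmdline && echo "IOMMU flag present" || echo "IOMMU flag missing"
# and print which device IDs vfio-pci will claim
grep -o 'vfio-pci.ids=[^ ]*' /proc/cmdline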

Making sure your config is correct before you reboot

sudo grubby --info=DEFAULT | egrep --color '^|intel_iommu=on|amd_iommu=on|rd.driver.pre=vfio-pci'

My output:

index=0
kernel="/boot/vmlinuz-5.9.11-200.fc33.x86_64"
args="ro rd.driver.blacklist=nouveau modprobe.blacklist=nouveau resume=UUID=ee7a7237-a530-4c37-8c62-79b33ddef287 rhgb intel_iommu=on vfio-pci.ids=8086:a2af,10de:1b80,10de:10f0 isolcpus=1,2,3,5,6,7"
root="UUID=aa16dfce-be02-4f48-a32d-ee26ae99a48d"
initrd="/boot/initramfs-5.9.11-200.fc33.x86_64.img"
title="Fedora (5.9.11-200.fc33.x86_64) 33 (Workstation Edition)"
id="ece237a53a024f1da19a8444d8979947-5.9.11-200.fc33.x86_64"

As you can see:

  • I have intel_iommu=on because I am on Intel
  • I am still missing the forced preload of the VFIO driver, rd.driver.pre=vfio-pci

Adding arguments the old-fashioned way

Adjust the line in /etc/default/grub and save it.
This alone does not update the GRUB config; it still needs to be regenerated:

sudo [ -e "/boot/grub2/grub.cfg" ] && sudo grub2-mkconfig -o /boot/grub2/grub.cfg;
sudo [ -e "/boot/efi/EFI/fedora/grub.cfg" ] && sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

This generates the config manually; you can already guess this is clumsy. You could use find instead.
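A find-based version might look like this (my sketch; it simply regenerates every grub.cfg that actually exists under /boot):

sudo find /boot -name grub.cfg -exec grub2-mkconfig -o {} \;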

My suggestion - use grubby

Grubby lets you check and update individual kernel options as well as the defaults.

Setting up IOMMU kernel parameters

If you are on AMD, run the following (on Intel, use intel_iommu=on instead):

sudo grubby --update-kernel=ALL --args="amd_iommu=on rd.driver.pre=vfio-pc"

In order to confirm we are on the right track:

sudo grubby --info=DEFAULT | egrep --color '^|intel_iommu=on|amd_iommu=on|rd.driver.pre=vfio-pci'

You will see a line with all the kernel parameters, and your new options should be highlighted:


args=" … rhgb amd_iommu=on rd.driver.pre=vfio-pc"

Your /etc/default/grub has been updated automatically.

If you have made a mistake, you can remove an argument the same way (in this example, having accidentally added only pre=vfio-pci instead of the full rd.driver.pre=vfio-pci):

sudo grubby --update-kernel=ALL --remove-args="pre=vfio-pci"
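Grubby also accepts --remove-args and --args in the same invocation, so a typo can be swapped for the correct value in one go (a sketch):

sudo grubby --update-kernel=ALL --remove-args="pre=vfio-pci" --args="rd.driver.pre=vfio-pci"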

Upgraded my setup to Fedora 33: no problems with VFIO, one small package-conflict problem, resolved inside DNF with --best --allowerasing.

That said, I also have some performance numbers for my 7700K, as this CPU is just barely good enough for VR. Regular games do not need to hit an 11 ms frame budget for a smooth experience; they would just run slower.

The default setup with Looking Glass, a Spice channel, and VirtIO serial scored below 4000 in the VRMark demo.
This is unplayable in demanding games and in games running through SteamVR on an Oculus headset.

Turning off the VirtIO and Spice channels netted me 700 points and moved the experience from unusable to bad.

Closing all LG sessions on top of that got me over 5000 points, reaching the level of a bad gaming laptop :slight_smile:

To give you perspective, a bare-metal score would be between 11000 and 14000.

Now I used grubby to isolate cores on the Linux host, leaving only the first physical core (threads 0 and 4) to the host.

sudo grubby --update-kernel=ALL --args="isolcpus=1,2,3,5,6,7"

Note that my XML was pinned like this the whole time:

  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='6'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='7'/>
    <emulatorpin cpuset='0'/>
  </cputune>
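The pairing above puts each guest core’s two vCPUs on the hyperthread siblings of one host core (1/5, 2/6, 3/7 on a 4c/8t 7700K). You can check your own sibling layout like this (a sketch):

grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list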

After this I was able to reach scores up to 9000, making the experience fine.
So far this is evidence of the following:

  • Always check your VFIO and overclocking changes against previous runs. Do not expect linear gains or losses.
  • The Linux scheduler + QEMU are not fast enough to adjust in time for VR:
    • The Linux scheduler is not fast enough to free cores for the VR workload.
      It takes minutes in-game for performance to improve.
    • The Linux scheduler is fast enough to notice a dip in the VR workload and move host tasks back onto the cores used by VR.
      Even when performance stabilizes, you will eventually get the stutter again, and it will take minutes to resolve again.

Hope this helps someone when diagnosing CPU limitations and losses in VFIO.


Interesting results on the scheduler slowness, thanks for sharing! One thing I noticed is that isolcpus is now deprecated in what looks like the 5.x series kernels.

isolcpus=       [KNL,SMP,ISOL] Isolate a given set of CPUs from disturbance.
                        [Deprecated - use cpusets instead]
                        Format: [flag-list,]<cpu-list>

So this solution may be yanked out from under you eventually.

Cset shield seems like a possible solution: https://www.codeblueprint.co.uk/2019/10/08/isolcpus-is-deprecated-kinda.html
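For reference, a minimal cset invocation along the lines of that article might look like this (a sketch; assumes the cpuset package is installed and the same 0/4-for-the-host split used earlier in the thread):

# wall off 1-3,5-7 for the VM, keep 0 and 4 for the host, and move kernel threads too
sudo cset shield --cpu=1-3,5-7 --kthread=on
# tear the shield down afterwards
sudo cset shield --reset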

And this is an example of how to use taskset in Proxmox (which is what I use as a host): https://forum.proxmox.com/threads/cpu-pinning.67805/#post-304715

While I’m here, anyone have a good way of benchmarking this GPU/latency stuff in a linux guest?

I think it should not be necessary going forward. Apart from benchmarks, where I do expect to see a difference, most games do not really care (VR games are scaled down to hit 90 Hz VSync).
The only game that is unplayable at the moment is Half-Life: Alyx. The game behaves like a power virus and loops on itself, eventually leading to a crash.

I believe most synthetic benchmarks work fine on Linux through support layers like Wine.
Not sure about third-party measuring tools like RivaTuner.
The value you are looking for is frame time. The frame target is 8 ms and the hard limit at 90 Hz is 1000/90 ≈ 11.11 ms; the runtime can smooth out even 13 ms.

Maybe the easiest way to test my problem on Linux is to just run a power-virus cycle in the guest and watch how quickly other tasks move away from those threads (Prime, render jobs, …).

The behavior I expect is that when 6 threads are pinned at 100%, host tasks are moved to, and spawned on, the other 2 (in general). Maybe it is just my ignorance :slight_smile:
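Something like this prints, once a second, which core (the psr column) each of the busiest tasks is sitting on (a rough sketch, not a proper measurement tool):

watch -n1 'ps -eo psr,pcpu,comm --sort=-pcpu | head -n 12'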


Now the legend continues, well, kind of fizzles. I am done with that game, except maybe as a test tool :smiley:

I got my score over 10000, compared to 12000 on bare metal, and I have beaten the score at the stock CPU multiplier. All the other games in my library work flawlessly with VFIO.

Switched to bare metal: Alyx runs fine, and I get down to a 5-6 ms average on high settings.

Before I finished testing, I decided to play a little. After getting stuck inside objects 17 times and having to run noclip just to get out, I am done. It was more fun trying to get VFIO to achieve better results than playing the game. Now playing Lone Echo and Subnautica without leaving my trusty workstation setup.


Here you say to use 20 for the folder’s priority, but then the rest is 30, if I am reading it right? I am probably going to redo the folder as 30, but wanted to mention it in case it is a problem.

Should be fine, it was a typo.


I am not seeing the "enabled" line when I run this command after completing these steps. If I remove the last grep, I get the following output (shortened).

[    2.164463] iommu: Default domain type: Translated 
[    2.380366] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    2.380410] pci 0000:00:01.0: Adding to iommu group 0
[    2.380419] pci 0000:00:01.1: Adding to iommu group 1
...
[    2.381188] pci 0000:0e:00.3: Adding to iommu group 33
[    2.381202] pci 0000:0e:00.4: Adding to iommu group 34
[    2.384780] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    2.385949] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

Did you resolve this? I am currently having the same issue (I don’t see any mention of IOMMU being enabled). When I run sudo lsinitrd | grep vfio after running dracut -fv, I am missing etc/modprobe.d/vfio.conf, but all the other files are listed. Finally, after rebooting, running lspci -nnv, and looking at my devices, I see that the vfio-pci driver is not in use. I’d be curious to know how you resolved your issues. Thanks.

Followed this guide for customizing my initramfs, since I use dracut on Arch Linux, and found these issues:

The module-setup.sh needs to be located under the new custom module directory, in your example /usr/lib/dracut/modules.d/20vfio/module-setup.sh (or /usr/lib/dracut/modules.d/30vfio/module-setup.sh if you stick to 30 priority)

The dd_dracutmodules+=" vfio " should be add_dracutmodules+=" vfio ".

When executing the dracut command, check the info logs (-v not required) for a line that loads the custom module:

[..]
dracut: *** Including module: vfio ***
[..]
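For anyone rebuilding the module from scratch, the corrected skeleton would look roughly like this (a hedged sketch of a typical dracut VFIO module, not the guide’s exact file):

#!/usr/bin/bash
# /usr/lib/dracut/modules.d/20vfio/module-setup.sh (or 30vfio)
check() { return 0; }       # always include this module
depends() { return 0; }     # no dependencies on other dracut modules
installkernel() {
    # pull the vfio kernel modules into the initramfs
    instmods vfio vfio_iommu_type1 vfio_pci vfio_virqfd
}
install() {
    # carry the vfio-pci ids configuration along
    inst_simple /etc/modprobe.d/vfio.conf
}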

To verify after reboot, check which driver each GPU is using:

sudo lspci -v -d 10de:1e84 | grep -E '(VGA|driver)'
0c:00.0 VGA compatible controller: NVIDIA Corporation TU104 [GeForce RTX 2070 SUPER] (rev a1) (prog-if 00 [VGA controller])
        Kernel driver in use: vfio_pci

Hmm, does this Navi/Vega reset fix work for Polaris or no? I’ve got a reference RX 580 I’d like to use with a VFIO setup, but this GPU is real funky about passthrough.

vendor-reset has worked for my RX 460 for months now, with the caveat that I also needed to disable PCIe power management in the BIOS and in systemd-boot, because it was crashing the card into an unfixable state.
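The post doesn’t name the exact option, but the usual kernel-side knobs for this are pcie_aspm=off and pcie_port_pm=off (my assumption; shown here with grubby as earlier in the thread, though on Proxmox with systemd-boot the args live elsewhere, see below):

# assumption: disable ASPM and PCIe port power management via kernel args
sudo grubby --update-kernel=ALL --args="pcie_aspm=off pcie_port_pm=off"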

Could you elaborate on the systemd boot issue?

Proxmox (my host), when installed on an EFI partition, ignores the grub config options and uses systemd-boot instead: Host Bootloader - Proxmox VE

Other distros likely still use grub only.
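On Proxmox with systemd-boot, the kernel arguments live in a different file; roughly like this (a sketch based on the linked docs; older PVE 6.x ships the tool as pve-efiboot-tool):

# kernel args go in /etc/kernel/cmdline instead of /etc/default/grub
sudo nano /etc/kernel/cmdline       # e.g. append: amd_iommu=on iommu=pt
sudo proxmox-boot-tool refresh      # regenerate the boot entries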


Thanks for the detail, I had been toying with the idea of using Proxmox at home as well, so this is good to know.

Thank you!!! Your corrections got it working for me.


People are reaching out to me about Q35 vs i440FX, and about having trouble with Q35.

Is anyone able to spot-check Q35 vs i440FX compatibility across QEMU’s versions? I know it flip-flops depending on which QEMU version you have.
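A quick way to enumerate which machine-type versions a given QEMU build ships (a sketch):

qemu-system-x86_64 -machine help | grep -E 'q35|i440fx'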


Having some trouble with this command:
dmesg | grep -i -e IOMMU | grep enabled

I get no output at all. However, if I run dmesg | grep -e AMD-Vi I get this output:

[    0.542866] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.542882] AMD-Vi: Lazy IO/TLB flushing enabled
[    0.543769] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.543770] AMD-Vi: Extended features (0xf77ef22294ada): PPR NX GT IA GA PC GA_vAPIC
[    0.543774] AMD-Vi: Interrupt remapping enabled
[    0.543775] AMD-Vi: Virtual APIC enabled

So is IOMMU working or not? Fedora 33 on an OEM Dell X370 motherboard with a Ryzen 7 1700.

Kernel options line in /etc/default/grub is as follows:
GRUB_CMDLINE_LINUX="rhgb quiet amd_iommu=on iommu=pt rd.driver.pre=vfio-pci vfio-pci ids=1002:687f,1002:aaf8 video=vesafb:off video=efifb:off rd.driver.blacklist=nouveau,amdgpu,snd_intel_hda modprobe.blacklist=nouveau,amdgpu,snd_intel_hda resume=UUID=3ddd486c-7dac-4691-9096-8e2bc929c5ac"

Too much?

Since I didn’t find another (more up-to-date) thread, I’m expecting this to be it…

My System Info:

OS: Fedora Linux 38 (Workstation Edition)
KERNEL: 6.4.6-200.fc38.x86_64
CPU: AMD Ryzen Threadripper 3960X 24-Core
GPU: AMD Radeon RX 6800 XT
GPU: NVIDIA GeForce GTX 1050 Ti
RAM: 256 GB
DE: Cinnamon (xorg)

Generally, when I try to free one of my two GPUs (the AMD card), either by command or via the described dracut module, the following happens:

  1. the GPU stays dark/off (this might be by design or a side effect, I can’t tell from the articles)
  2. Fedora shows the login page on the secondary GPU, even when I was logged in before
  3. the Fedora login page refuses the login … although it looks more like it accepts it and the DE crashes right after

Anyone got an idea what the problem could be? Or is that just an unsupported use case?

AMD GPUs are plagued with reset issues in VFIO passthrough due to AMD’s lack of care in supporting this use case, and should be avoided.