VFIO in 2019 – Fedora Workstation (General Guide though) [Branch, Draft]

@BansheeHero, at the step where you had us set up the VM for long-term use, you had us run sudo virsh edit win10. But when I go to add <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>, it saves but doesn't apply, and changes it back to just <domain type='kvm'>.

When I try to add

    <qemu:commandline>
        <qemu:arg value='-cpu'/>
        <qemu:arg value='host,hv_time,kvm=off,hv_vendor_id=null'/>
      </qemu:commandline>

it just gives me an error saying that tags aren't being properly closed. Not exactly sure why neither of these is working.

Did you use virt-manager to make any changes to the VM after the edit? It has this bad habit / bug where it ignores the namespace attribute and simply rewrites the tag as plain <domain type='kvm'>.

Also take note that you are supposed to edit the existing <domain> tag, not add an entirely new one.

For the qemu:commandline block, try copying and pasting the text into a plain text editor first, then copying it from there, to ensure there aren't any oddities introduced by HTML. I've encountered cases where spaces/quotation marks were not the exact ASCII equivalents when pasting directly from an HTML-enabled source.

Last but not least, double check where you placed the qemu block. It should be outside of the devices block and within the domain block, e.g.

  </devices>
  <qemu:commandline>
    <qemu:arg value='-object'/>
    <qemu:arg value='input-linux,id=mouse1,evdev=/dev/input/by-id/usb-Logitech_G400s_Optical_Gaming_Mouse-event-mouse'/>
  </qemu:commandline>
</domain>

@kacyl1 this is what happens, with screenshots:
[two screenshots of the error]
thanks for helping with this.

This error is usually due to a missing xmlns. The first two lines of my VM XML look like this:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Win10_2019</name>

@kacyl1
That seems to have fixed it. I could have sworn I had put that in there before, but maybe it messed me up because I wasn't putting them in together and was trying to save in between.

Now I just need to figure out why my VM isn't showing my graphics card.

So I performed a fresh install of everything and started over from the beginning of the guide, due to running into issues with my GPU not being recognized by the VM. I got up to the step where we modify vfio.conf and add

force_drivers+="vfio vfio-pci vfio_iommu_type1"
install_items="/usr/sbin/vfio-pci-override.sh /usr/bin/find /usr/bin/dirname"

and then run dracut --force.

After I checked the grep vfio output and restarted, the Linux OS said that it was in emergency mode. I literally followed everything step by step, too. Anything I might have missed that would cause that?

Thanks in advance for any help, and happy new year.

In order to properly troubleshoot, we'll need to know why your system booted into emergency mode. You should be able to use the dmesg command to see the boot logs and look for any lines where things seem to have gone wrong.

My bet would be on something missing from the install_items line. Please ensure your system has those files (vfio-pci-override.sh, find, and dirname) and they’re executable.
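
If it helps, here's a quick sanity check (a rough sketch; the paths come from the install_items line above, and journalctl only shows the previous boot if persistent logging is enabled):

    # Check that the files referenced by install_items exist and that the script is executable
    ls -l /usr/sbin/vfio-pci-override.sh /usr/bin/find /usr/bin/dirname
    sudo chmod +x /usr/sbin/vfio-pci-override.sh

    # Look at error-level messages from the previous boot for clues about the emergency mode
    journalctl -b -1 -p err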

Happy new year and I hope we can get this working!

When I was following the guide, my system definitely did not have the vfio-pci-override.sh file. I had to create it myself because it did not exist in the folder. I am running Fedora 31.

This time around during the setup, my graphics card seems to be in use by nouveau? Is this something I should blacklist?

Edit:

Even after adding a blacklist for it in my grub file, it's still displaying exactly the same as the image above. Not sure exactly why this is happening.
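
In case it matters, the blacklist approach I've seen elsewhere looks roughly like this (a sketch, not something from the guide, and a grub-only blacklist may not take effect until the initramfs is rebuilt):

    # Tell modprobe never to load nouveau
    echo 'blacklist nouveau' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf

    # Rebuild the initramfs so the blacklist applies at early boot
    sudo dracut --force

    # Alternatively, roughly the same effect via kernel arguments:
    #   rd.driver.blacklist=nouveau modprobe.blacklist=nouveau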

If you do not have two identical GPUs, it is not necessary to use the vfio-pci-override.sh method. It would be simpler to use the device ID override.
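
Roughly, the device ID override looks like this (a sketch with placeholder IDs; your actual vendor:device pairs will come from lspci):

    # Find the vendor:device IDs of the GPU (and its audio function) you want to pass through
    lspci -nn | grep -i nvidia
    # e.g. 01:00.0 VGA compatible controller [0300]: ... [10de:1b81]
    #      01:00.1 Audio device [0403]: ... [10de:10f0]

    # Bind them to vfio-pci by ID via a modprobe option (the IDs below are placeholders)
    echo 'options vfio-pci ids=10de:1b81,10de:10f0' | sudo tee /etc/modprobe.d/vfio.conf

    # Or, equivalently, as a kernel argument in GRUB_CMDLINE_LINUX:
    #   vfio-pci.ids=10de:1b81,10de:10f0

    # Rebuild the initramfs afterwards
    sudo dracut --force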

Another thing to try is adding vfio_virqfd to your dracut.conf.d/vfio.conf.
It is not in the guide, but I saw it in the list on the Arch Linux PCI passthrough wiki, e.g.
force_drivers+="vfio vfio-pci vfio_iommu_type1 vfio_virqfd"

In my case I am not using identical hardware. I have Intel HD Graphics that I'm going to use on Fedora (that's all for coding), and then I will pass the NVIDIA card through to the VM for gaming. Will that work for me, since I'm on Fedora and not Arch Linux?

This is exactly the setup I have, and the best way is to use the old-fashioned blocking by device IDs.
I will try to repost it here.

The new idea from Wendell was to use the PCI location and not the vendor IDs; this would allow you to have two identical GPUs and still split them.
For a setup like yours it is not necessary.

OK thanks, yeah, if you could post it here or maybe link me to a guide that has it, I would definitely appreciate that. Thanks.

Hi, I currently have a working setup using Ubuntu 19.10 server, with 3 GPUs all passed through (so no GPU is left for the host system). Could you let me know what I would need to do differently on Fedora 31 to get this working, please?

In case it helps anyone - I’ll just note down here some differences in the required commands for Fedora Silverblue users, compared to the guide’s Fedora Workstation commands.

Workstation:

sudo dnf install @virtualization

Silverblue:

rpm-ostree install virt-install libvirt-daemon-config-network libvirt-daemon-kvm qemu-kvm virt-manager virt-viewer

Workstation:

sudo lsinitrd | grep vfio

Silverblue:

That command just didn’t work, as it can’t find the initramfs. I’m not sure what the equivalent command would be. I just skipped that.
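
My guess (untested; the path pattern here is an assumption) is that you can point lsinitrd at the ostree-managed initramfs directly:

    # On Silverblue the initramfs lives under /boot/ostree, so something like this may work
    sudo lsinitrd /boot/ostree/fedora-*/initramfs-*.img | grep vfio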

Workstation:

sudo dracut --add-drivers "vfio vfio-pci vfio_iommu_type1" --force
sudo dracut --force

Silverblue:

rpm-ostree initramfs --enable --arg=--add-drivers --arg="vfio vfio-pci vfio_iommu_type1"

Workstation:

Add intel_iommu=on or amd_iommu=on to GRUB_CMDLINE_LINUX:

sudo vim /etc/sysconfig/grub
sudo dnf reinstall kernel
grub2-editenv list
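
For reference, after the edit the line in /etc/sysconfig/grub might end up looking something like this (the pre-existing options are just illustrative):

    GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on iommu=pt"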

Silverblue:

rpm-ostree kargs --editor
And then add amd_iommu=on iommu=pt using the editor.

Finally, instead of the /usr/sbin/vfio-pci-override.sh script, I added vfio-pci.ids=vvvv:dddd,vvvv:dddd using rpm-ostree kargs --editor, as above.

Workstation:

lstopo is a utility you can use to view your system's topology:

apt install hwloc

Silverblue:

toolbox create
toolbox enter
sudo dnf install hwloc hwloc-gui
lstopo

I'm a bit confused by the instructions on checking topology with hwloc. I installed it and then ran lstopo. This is the output it gives me:

As you can see, it just shows me one single MemoryModule/node. My guest GPU is PCI 2f:00.0 near the bottom right corner. What am I supposed to learn from this?

  • What cores would be best for the GPU then - or are they just all equivalent on this CPU (3900X)?
  • What do the numbers on the lines mean (e.g. 32 on the line coming off the guest GPU)?
  • Would it be better to pass through 9 cores instead of my intended 8 cores, to fully share the L3 cache?

From what I understand, this is less about your GPU and more about maximizing CPU performance. What you’re supposed to learn here is how your cores are numbered so that you can pass through threads which share a core, cores which share cache, etc. The idea is that the less jumping around the die, the better.

  • I think that anything would be fine for your GPU.
  • I’m not sure, but I would guess it’s IOMMU groups
  • You’re fine not saturating the L3 cache.

I would use cores 0-5, 12-17. This will neatly split your guest and host with 12 threads each, and no overlap on cache.

Hopefully this all makes sense, it can be a lot to take in!
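
If you want to double-check the numbering without squinting at the lstopo picture, something like this shows which logical CPUs share a physical core (a generic check, nothing specific to this guide):

    # One row per logical CPU, showing the physical core and socket it belongs to
    lscpu -e=CPU,CORE,SOCKET

    # Or, for a single CPU, list its SMT sibling(s)
    cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list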

Thanks guys, I finally got mine working on Fedora 31 with a Ryzen 3900X.

Hey there, wondering if I have a couple of things set right with regards to CPU pinning and isolation and perhaps those hyperv options too.
My lstopo:

Here’s a portion of my XML:

  <memory unit="KiB">16777216</memory>
  <currentMemory unit="KiB">16777216</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement="static">10</vcpu>
  <cputune>
    <vcpupin vcpu="0" cpuset="1"/>
    <vcpupin vcpu="1" cpuset="7"/>
    <vcpupin vcpu="2" cpuset="2"/>
    <vcpupin vcpu="3" cpuset="8"/>
    <vcpupin vcpu="4" cpuset="3"/>
    <vcpupin vcpu="5" cpuset="9"/>
    <vcpupin vcpu="6" cpuset="4"/>
    <vcpupin vcpu="7" cpuset="10"/>
    <vcpupin vcpu="8" cpuset="5"/>
    <vcpupin vcpu="9" cpuset="11"/>
  </cputune>
  <os>
    <type arch="x86_64" machine="pc-q35-4.1">hvm</type>
    <loader readonly="yes" type="pflash">/usr/share/edk2/ovmf/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state="on"/>
      <vapic state="on"/>
      <spinlocks state="on" retries="8191"/>
      <vpindex state="on"/>
      <runtime state="on"/>
      <synic state="on"/>
      <stimer state="on"/>
      <reset state="on"/>
      <vendor_id state="on" value="whatever"/>
      <frequencies state="on"/>
      <reenlightenment state="on"/>
      <tlbflush state="on"/>
      <ipi state="off"/>
      <evmcs state="off"/>
    </hyperv>
    <kvm>
      <hidden state="on"/>
    </kvm>

and the portion of my grub file:

intel_pstate=passive pcie_aspm=off intel_iommu=on isolcpus=1-5,7-11 nohz_full=1-5,7-11 rcu_nocbs=1-5,7-11

What I have attempted is to keep core 0 and its sibling thread for the host while the guest gets the rest of the chip.
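
One quick way to confirm the isolation took effect after a reboot (assuming isolcpus applied cleanly):

    # Should print the isolated CPU list from the kernel command line
    cat /sys/devices/system/cpu/isolated
    # expected output: 1-5,7-11

    # And double-check what the kernel actually booted with
    cat /proc/cmdline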

These instructions are incomplete and will not work without an additional step.

There is nothing in here causing the script /usr/sbin/vfio-pci-override.sh to execute. To do that, you need to create a file like the following (you see it in the lsinitrd dump in the post):

Open an editor using: sudo vi /etc/modprobe.d/vfio.conf, and paste the following into the file:

install vfio-pci /usr/sbin/vfio-pci-override.sh; /sbin/modprobe --ignore-install vfio-pci

options vfio-pci disable_vga=1

Create that file and include it in your initial ram disk (initramfs) using dracut, as described in the post.
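
For anyone who, like an earlier poster, doesn't have /usr/sbin/vfio-pci-override.sh at all: below is a minimal sketch of what such a driver_override script can look like. The PCI addresses are placeholders, and this is a simplified stand-in, not necessarily the guide's exact script. Remember to mark it executable (chmod +x).

    #!/bin/sh
    # Minimal driver_override sketch: bind specific PCI functions to vfio-pci.
    # Replace the addresses below with your GPU's video and audio functions.
    DEVS="0000:01:00.0 0000:01:00.1"

    for DEV in $DEVS; do
        echo "vfio-pci" > "/sys/bus/pci/devices/$DEV/driver_override"
    done

    # The install line in /etc/modprobe.d/vfio.conf then loads vfio-pci itself.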

Also note, this step from the beginning of the post is for Intel CPUs:
dmesg | grep -i -e IOMMU | grep enabled
which should show a line of text with the word “enabled” in it.
For AMD CPUs, this is the command:
dmesg | grep -i -e IOMMU | grep "counters supported"
and you should see some text like “AMD-Vi: IOMMU performance counters supported”

I also used this line in my /etc/dracut.conf.d/vfio.conf file:

force_drivers+="vfio vfio-pci vfio_virqfd vfio_iommu_type1"

I don’t know if the added vfio_virqfd is helpful or not.

In the variable GRUB_CMDLINE_LINUX in /etc/default/grub, I appended these parameters:

amd_iommu=on rd.driver.pre=vfio-pci

but other guides recommend also adding iommu=pt to the list once everything is working, which can provide better performance.
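
One thing I'd add (adjust the path for BIOS vs. UEFI; the UEFI path below assumes a standard Fedora install): after editing /etc/default/grub you still need to regenerate the grub configuration, e.g.

    # UEFI systems
    sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

    # BIOS systems
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg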

I hope this helps someone! FWIW I’m using Fedora 31.
