Ubuntu 17.04 -- VFIO PCIe Passthrough & Kernel Update (4.14-rc1)

Just an FYI: on 18.04.1 I had to install the latest 4.18-rc8 kernel to get everything working correctly. Before that I was getting segfaults in libvirtd when using Virtual Machine Manager. So far, things have been good on that kernel. :slight_smile:

Hmmm, that could be why it was so broken in Elementary OS… Sadly, I can’t test it this week because I’m moving stuff around in my room; maybe on Sunday if I get lucky :open_mouth:

Just a quick question:

Do I need two GPUs installed to do passthrough, or can I pass through a single card?

If I use two cards, which one should I pass through if one is older than the other?

No and yes.
Technically you can pass through a single GPU, but then you have nothing left to drive the host’s display. You can run the host headless, put a monitor on the passed-through GPU, and it will display the guest only.

Which card to pass depends on your needs. If you need the faster card in the VM, pass the faster card; otherwise use the slower one. There’s no requirement to pass through a specific card.


thanks @mihawk90

I’ve just finished setting up my Ubuntu 18.04 LTS machine with a Windows 10 guest VM that uses Looking Glass, and I’ve written down, in extreme detail, all of the problems I ran into along the way, here:

Hopefully some of you will find this useful!

I pretty much rewrote and updated this tutorial (with a more detailed guide in some parts) for elementary OS 5.0, but I got stuck because the second GPU I’m passing through is too old to initialize correctly :smiley: Ubuntu is able to re-initialize it on boot, but Windows isn’t, so I need to get a better GPU, then I’ll update the guide and publish it on the forums.

Hi guys.

My Windows 10 install recently borked itself after another update gone wrong… third time in a year… So I decided to check again whether I could switch to Linux and still game occasionally (Warframe, mostly), and down the GPU passthrough rabbit hole I went.

I’m writing this because I finally got it to work and I haven’t seen a guide (at least a pretty recent one) for this particular combination of hardware. And also because I want to write down the steps so I can do it again if I have to :slight_smile:

So, my hardware/software:

Intel CPU (i5 6600K)
ASUS motherboard (Z170 Pro Gaming)
Host GPU: Intel iGPU
Guest GPU: Nvidia card (GTX 1070)
Ubuntu 18.04

I won’t go into super detail about everything you have to do; the existing guide is pretty much all you need to know. This is just what you have to do to make it work with this particular combination of hardware and software. AGAIN, PLEASE READ THE ORIGINAL GUIDE SO YOU KNOW WHAT YOU ARE DOING.

Without further ado:

sudo nano /etc/default/grub

Locate this line and modify it like so, substituting your own device IDs:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=1 intel_iommu=on vfio_pci.ids=10de:1b81,10de:10f0"

Yes, I had to specify the vfio IDs here as well; I couldn’t get vfio to grab the GPU, nouveau always won the battle even with the softdep entries you will see later on. Also, don’t forget to:

sudo update-grub
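
As a sanity check after the next reboot, you can confirm the IOMMU actually came up (this is just a generic check, not specific to this board):

dmesg | grep -i -e DMAR -e IOMMU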

Then let's edit:

sudo nano /etc/initramfs-tools/modules

Like so:

softdep nouveau pre: vfio vfio_pci
vfio
vfio_iommu_type1
vfio_virqfd
options vfio_pci ids=10de:1b81,10de:10f0
vfio_pci ids=10de:1b81,10de:10f0
vfio_pci
#nouveau

Then:

sudo nano /etc/modules

has to contain:

vfio
vfio_iommu_type1
vfio_pci 

Then make this file:

sudo nano /etc/modprobe.d/nouveau.conf

and paste the following:

softdep nouveau pre: vfio vfio_pci

Then:

sudo nano /etc/modprobe.d/vfio_pci.conf

and add:

options vfio_pci ids=10de:1b81,10de:10f0

Pheeeewww! But wait, there's more :slight_smile:

Okay, at this point you really should run:

sudo update-initramfs -u
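
If you want to double-check your work, here are a couple of optional sanity checks (the 10de:1b81 ID below is the 1070 from this guide; substitute your own):

# before rebooting: confirm the vfio modules made it into the freshly built initramfs
lsinitramfs /boot/initrd.img-$(uname -r) | grep vfio
# after rebooting: quick driver check for just the guest GPU
lspci -nnk -d 10de:1b81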

At this point, if you have your monitor hooked up to the HDMI port of your motherboard and you reboot, you will get a blank screen. Fret not, it should actually be mostly fine: if you check with lspci -nnv |less, you should see vfio-pci as the loaded kernel driver for your guest GPU. If you rebooted and are stuck at a blank screen after POST, do the following; if you haven’t rebooted yet, just continue along.

CTRL-ALT-F2 (to switch to a text console and log in)

Then we will force Ubuntu to use Xorg instead of Wayland:

sudo nano /etc/gdm3/custom.conf

and uncomment the following line:

WaylandEnable=false

and finally, let's tell Xorg which GPU it should use for the display:

sudo nano /etc/X11/xorg.conf

and add the following:

Section "Device"
    Identifier "Intel GPU"
    Driver "modesetting"
    BusID "PCI:0:2:0"
EndSection

Where BusID is the bus ID of your iGPU (find it with lspci -nnv |less).
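
One gotcha worth knowing: lspci prints bus addresses in hex, while the BusID string in xorg.conf is decimal. For an iGPU at 00:02.0 they happen to look the same, but a card at, say, 0a:00.0 would become "PCI:10:0:0". A quick way to list the candidates:

lspci -nn | grep -Ei 'vga|display|3d'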

That’s it. If you reboot now, everything should be ready for creating your VM.

Have Fun!

P.S.: Fun fact: the Nvidia drivers check whether they are running inside a VM; if they detect one, they give you an error and stop working/installing… Anyway:

virsh edit [your vm name]

and make sure it contains the following:

<hyperv>
  <relaxed state='on'/>
  <vapic state='on'/>
  <spinlocks state='on' retries='8191'/>
  <vendor_id state='on' value='ab5485961025'/>
</hyperv>
<kvm>
  <hidden state='on'/>
</kvm>

Peace!

“I haven’t seen a guide (at least a pretty recent one)”

Well, that escalated quickly, here you go :smiley:

https://forum.level1techs.com/t/elementary-os-5-0-beta-vfio-pcie-passthrough-guide-tutorial/131420/11

No, not for me, but after the [for … do … done] command I got 20 groups, so that’s fine too? When it comes down to

I’m quite unsure what to do, because I’m running 2× GeForce GTX 660 cards and they both have the same IDs, plus they don’t count as “amdgpu”, right?
The whole plan for me is to set up a Win7 VM to use in seamless mode (I don’t want to game on it, just use some programs like InDesign and Photoshop).
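
(For reference, the [for … do … done] command mentioned above is presumably the usual IOMMU-group listing loop from the original guide, something along these lines:)

# list every PCI device together with its IOMMU group
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done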

He said similar, not identical.

Since you are using a different setup, you should have a different number of groups.

Since you are using Nvidia cards, you should be using nvidia or nouveau instead of amdgpu. You can find which one you are using with lspci -k.
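
For example, filtering so you only see the display adapters (the exact module names will differ per system):

lspci -k | grep -EA3 'VGA|3D'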

It is possible to use identical GPUs, although it does add another level of setup. The GPUs still have to be in different groups.

This page has some info on using identical GPUs, although it is for Arch and not Ubuntu.
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs


QEMU/KVM does not have seamless mode. You have to use VirtualBox for that.

Also, you have to use SeaBIOS and not OVMF if you want to run Win7. Win8 and above work with OVMF; Win7 does not.

With graphics passthrough, unless you use Looking Glass, you have to have a second monitor or keep switching display inputs, because the video comes out of the GPU you passed through and not in a window on the host like a normal VM.

I feel like this is the final boss of computing


I tried following this guide up to the words

“Once all that is done, reboot the system and run lspci -nnv |less”

and I get this error when I try to assign my GPU to the vfio-pci driver:

systemd-modules-load[398]: Failed to find module ‘vfio_pci ids=10de:1b06,10de:10ef’
systemd[1]: systemd-modules-load.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Load Kernel Modules.

I am using Ubuntu 16.04 with two Nvidia GPUs (750 Ti and 1080 Ti). I have the proprietary driver installed (version 396).

Here is my ls-iommu.sh output, just in case:
IOMMU Group 10 03:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43bb] (rev 02)
IOMMU Group 10 03:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b7] (rev 02)
IOMMU Group 10 03:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b2] (rev 02)
IOMMU Group 10 1d:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
IOMMU Group 10 1d:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
IOMMU Group 10 1d:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
IOMMU Group 10 1e:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
IOMMU Group 10 1f:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)
IOMMU Group 10 21:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] [10de:1380] (rev a2)
IOMMU Group 10 21:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fbc] (rev a1)
IOMMU Group 11 22:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1b06] (rev a1)
IOMMU Group 11 22:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10ef] (rev a1)

I also replaced the “amdgpu” string in the configs with “nvidiafb”. I am not sure if this is the correct value, because lspci -nnv lists several driver modules for both GPUs:

Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_396, nvidia_396_drm

Do I need to install some special package so that the vfio_pci module can be found? Or do I need to uninstall the proprietary driver?

edit: kernel info:
4.18.19-041819-generic

You should use the kernel driver in use, i.e. nvidia in your case. “Kernel modules” lists the compatible modules (I think), while “Kernel driver in use” is the module actually bound to the device.

Try removing the line vfio_pci ids=10de:1b06,10de:10ef but keep options vfio_pci ids=10de:1b06,10de:10ef. Then update GRUB and the initramfs and reboot. How the vfio options are passed is one of the areas where distros and distro versions differ.
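
In other words, /etc/initramfs-tools/modules would end up looking roughly like this, assuming the same two device IDs (just a sketch of the suggested change, mirroring the guide's file above, not a verified config):

# softdep for whichever host driver you need to defer (nouveau in the guide;
# with the proprietary driver you would target nvidia instead)
softdep nouveau pre: vfio vfio_pci
vfio
vfio_iommu_type1
vfio_virqfd
vfio_pci
options vfio_pci ids=10de:1b06,10de:10ef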

Thanks, I removed “vfio_pci ids=10de:1b06,10de:10ef” and the error disappeared, but the driver in use is still nvidia.
I then changed “softdep nvidiafb pre: vfio vfio_pci” to “softdep nvidia pre: vfio vfio_pci”, and that also did not hand the GPU over to the vfio-pci driver. It still says:
Kernel driver in use: nvidia

OK, it seems I forgot to run sudo update-initramfs -u. After this step it lists vfio-pci as the driver in use. Edit: but only for the audio device:

22:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1b06] (rev a1) (prog-if 00 [VGA controller])
Subsystem: ASUSTeK Computer Inc. Device [1043:85e5]
Flags: bus master, fast devsel, latency 0, IRQ 64
Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
Memory at c0000000 (64-bit, prefetchable) [size=256M]
Memory at d0000000 (64-bit, prefetchable) [size=32M]
I/O ports at f000 [size=128]
[virtual] Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities:
Kernel driver in use: nvidia
Kernel modules: nvidiafb, nouveau, nvidia_396, nvidia_396_drm

22:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10ef] (rev a1)
Subsystem: ASUSTeK Computer Inc. Device [1043:85e5]
Flags: bus master, fast devsel, latency 0, IRQ 14
Memory at f7080000 (32-bit, non-prefetchable) [size=16K]
Capabilities:
Kernel driver in use: vfio-pci
Kernel modules: snd_hda_intel

Check both /etc/modules and /etc/initramfs-tools/modules for any typos, i.e. that you have the correct PCI IDs and everything else is correct.
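
A quick way to eyeball every vfio-related entry across the usual files in one go:

grep -Hn vfio /etc/modules /etc/initramfs-tools/modules /etc/modprobe.d/*.conf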

Yes, everything seems correct, but nvidia still overrides the vfio-pci driver for GPU 10de:1b06:

Here are both files - https://pastebin.com/JSsRMP2m

Thanks for trying to help. I have managed to fix this problem by replacing this line in my configs:
softdep nvidia pre: vfio vfio_pci

with these 4 lines:
softdep nouveau pre: vfio vfio_pci
softdep nvidia pre: vfio vfio_pci
softdep nvidia_drm pre: vfio vfio_pci
softdep nvidia-* pre: vfio vfio_pci

Now it finally shows:
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau, nvidia_396, nvidia_396_drm
for my “VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1b06] (rev a1) (prog-if 00 [VGA controller])”.

Now I just need to pass this GPU to my virtual machine…


Quick and maybe stupid question, but where does virt-manager store the XML or shell script containing all the configuration details for the VM? I figured it out for the driver image and have now moved that to my preferred location, but I can’t find the configuration files anywhere.
With Ubuntu 16.04 and the version of QEMU that came with it, I could write the scripts myself, store them where I wanted, and run them manually without virt-manager. With the newer version of QEMU I have struggled to get my old scripts to work, so I have moved back to virt-manager. However, virt-manager seems to be a black box regarding the VM config details.

Thanks
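
For reference, libvirt (which virt-manager sits on top of) keeps defined domains as XML rather than shell scripts. Assuming the default qemu:///system connection, the files live under /etc/libvirt/qemu/, and the recommended way to read or export a definition is via virsh (the domain name "win10" below is just a placeholder):

virsh -c qemu:///system dumpxml win10 > win10.xml
sudo ls /etc/libvirt/qemu/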