Play games in Windows on Linux! PCI passthrough quick guide

Hello all! I am Gray from GrayWolfTech. I make videos on Linux and other interesting technology. Wendell has given me permission to post my written guides here for my videos. This is great for two reasons: First it helps grow this community and second my videos reach a wider audience to share what I have learned. I look forward to posting all my guides and videos here, and I am more than happy to answer any questions about the guide you may have.

– Onward! –

Now it’s time for one of the coolest and more advanced capabilities of Linux: KVM, the Kernel-based Virtual Machine. We are going to go through, step by step, how to set up PCI passthrough to a Windows virtual machine on a Debian host to play games!

Thanks to Red Hat, KVM can run virtual machines with almost bare-metal performance and supports a wide variety of other neat features. The one we are going to focus on in this video is, of course, PCI passthrough: giving a virtual machine full access to a PCI Express graphics card for gaming, CAD, or 3D rendering. With this neat capability, you can run Linux as your host OS and then pass your GPU (or one of your GPUs if you have multiple) to a virtual machine to play games.

Let’s go over the requirements for this project:

First and most important, back up any data you have on your PC. If you know what you are doing then you will not lose anything, but accidentally selecting the wrong drive when installing can make you very sad when you figure out the drive you wiped had all your really good porn on it.
Second, this process will take time. For an experienced Linux user it should take about 20 minutes (not counting time to install Debian or recompile the kernel). For someone who is new it might take longer. Make sure you allow yourself an entire afternoon, or a good chunk of 4-5 hours where you could potentially not have a working PC.
Third, follow the instructions I lay out in the video. I will answer questions down below as I can, but I will not answer anything that was covered in the video, or any problems that arise because you skipped a step you didn’t think was necessary. I have been working on tweaking this guide for over 7 months now, and this is by far the easiest and most functional setup for PCI passthrough available. But it only works if you follow everything in the guide.

Now for the hardware requirements:

  • This will not work at all if your CPU doesn’t support Intel’s VT-d virtualization technology. You can find this out on Intel ARK: http://ark.intel.com/products/80807/Intel-Core-i7-4790K-Processor-8M-Cache-up-to-4_40-GHz?q=4790k.
    Search your CPU and see if it supports VT-d. If it does not, then stop the video right here; this guide is of no use to you, you need a better CPU. Or keep watching for science!

  • For AMD as well as Intel users, you can also check this great wiki page for IOMMU support: https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware
    Your motherboard must support IOMMU mapping, as well as Intel VT-d or the AMD equivalent (AMD-Vi).

  • You need at least 16 GB of RAM. Do not try this with less than 12 GB. You need at least 8 GB for the Windows VM, plus enough for the host to run on.

  • Next, you need a GPU that supports UEFI. If you don’t know, then Google your GPU model plus “UEFI” to see what people say. If your GPU does not support UEFI then you can still use PCI passthrough, but there are a few key differences: first, you will need two different discrete GPUs; second, later in the installation steps, make sure to select BIOS instead of UEFI in virt-manager.

  • Next, you must have two different GPUs.
    There are two setups possible with this guide:
    Use two discrete GPUs. These GPUs must be different models; they cannot be identical, as their PCI hardware identifiers will be the same and the system cannot tell one from the other. An example here would be a GTX 980 and a GTX 970. You cannot use two GTX 980s. There are a few methods for using two of the same GPU, but they do not work very well, so I don’t recommend them.
    The setup most people will have is to use the iGPU in your CPU for the host, and then pass your discrete GPU to the VM. For this, your CPU must have an integrated GPU. Most desktop Intel Core i-series CPUs include one. For AMD, just search your CPU model.

  • You must have an extra mouse and keyboard around to pass to the VM during installation.

  • And lastly I recommend having at least two monitors around to make life easier.

First thing to do is reboot into your UEFI’s settings and make sure VT-d and virtualization are enabled. They may be called “virtualization acceleration” or something else. If you are going to use your iGPU for the host, make sure your initial display output is set to the iGPU and plug your monitor into the video outputs on the motherboard. Check your motherboard’s manual for more info.

Install Debian Stretch. We need the stretch branch because it has a lot of updates to libvirt and QEMU that we need, as well as OVMF for UEFI VM support.
Install Debian like normal. I won’t go into making a bootable USB drive or anything; you can take 5 seconds to Google that.

Some options I recommend:
Boot the advanced installer. When you get to the user setup, you can select whether to use the root user account. I say no to that because I like to use sudo instead.
For the graphical interface, select whatever you want; the desktop environment you use really doesn’t matter. I prefer XFCE, but for this guide we will use GNOME 3.

First thing to do once you are in: open up a terminal and run $ sudo nano /etc/default/grub

Edit the line GRUB_CMDLINE_LINUX_DEFAULT and add “intel_iommu=on” to the end, before the closing quote (see the example below). Ctrl+X, then Y, to save and exit. Then run $ sudo update-grub and reboot. We just enabled IOMMU mapping for PCI devices.
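After the edit, the line should end up looking something like this (assuming the stock Debian “quiet” option was already in the quotes; leave any other existing options in place):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"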

After the reboot, run $ lspci -nn in the terminal. This lists all your PCI devices and their addresses. Note your GPU’s address; mine is 01:00.0 and 01:00.1. Also write down the hardware IDs of the GPU; mine are 10de:13c0 and 10de:0fbb. You will need these later.
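To give you an idea of what to look for (the controller names and IDs will differ for your card; this is roughly what a GTX 980 and its built-in audio device show up as):

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 980] [10de:13c0] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)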

Next do $ find /sys/kernel/iommu_groups/ -type l
This lists your IOMMU groups. Find your GPU’s address again and note the group number. If there are other devices in that same group, then you need the ACS patch, which allows you to isolate PCI devices that share an IOMMU group.
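The output is one symlink per device, with the IOMMU group number in the path. As a rough example (group numbers vary from board to board), a GPU and its audio function sharing group 1 with the root port would look like:

/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1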

UPDATE:

Kernel 4.8 added better support for IOMMU group separation. If the only other device in your GPU’s IOMMU group is the one at address 00:01.0 (that is the PCIe root port the card sits behind), then you don’t need the ACS patch.

If your GPU is the only device in that group (or only shares it with 00:01.0), then go forth and skip this part. Otherwise, we are going to briefly run over recompiling the kernel with the ACS patch.

Recompiling the kernel with the ACS patch:

Install linux-source, libqt4-dev, build-essential, and libssl-dev. Open a file browser, go to /usr/src, and open the linux-source archive. Extract it somewhere.
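All four should be available straight from the standard Debian repos, so one command should cover them:

$ sudo apt install linux-source libqt4-dev build-essential libssl-dev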

Download the ACS patch from my Google Drive here. This patch is based heavily on this one here, adapted for kernel 4.7+.

Open a terminal in the linux source folder you extracted. Type $ patch -p1 < and drag the patch file into the terminal, then hit enter. Then run $ make xconfig. Ctrl+F and search for KVM; make sure it is checked. For some fun, search for and enable the boot logo. Search for “version” and change the local version string to -acs-patch. Click save at the top and close that window.
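The full patch command ends up looking something like this (the filename and path here are just examples; use wherever your browser saved the patch):

$ patch -p1 < ~/Downloads/acs-override.patch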

Now do $ make -j4 deb-pkg (adjust the number to match the number of logical cores your CPU has for faster building; I have a 4-core CPU with hyper-threading, so there are 8 logical cores).

If your build stops with an error about SSL keys, edit the config file manually with $ nano .config
Ctrl+W to search for CONFIG_SYSTEM_TRUSTED_KEYS and comment out that line.
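“Comment out” just means putting a # at the start, so the line ends up like this (the value in the quotes varies by kernel version; this one is a placeholder):

#CONFIG_SYSTEM_TRUSTED_KEYS="whatever-was-here"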

Save it and re-run the build. If it asks you about the certificate filename, just hit enter.

This kernel build will take about 25 minutes on a fast quad-core CPU, and up to 4-5 hours on slower CPUs. So go watch some of my other videos or eat some lunch while it builds.

As it finishes, it will build deb packages of the kernel. The only ones we need are linux-image and linux-headers, so once those are done you can kill the process.
The deb packages will be one directory up. Remove the extra packages, keeping only linux-image and linux-headers, then install them with $ sudo dpkg -i
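If those are the only two .deb files left in that directory, a wildcard does the job (the real filenames will be longer and include the -acs-patch version string we set earlier):

$ sudo dpkg -i linux-image-*.deb linux-headers-*.deb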

Now we can get back to the regular guide.

Install virt-manager and OVMF.
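Both should be available straight from the stretch repos:

$ sudo apt install virt-manager ovmf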

Edit your grub configuration again with $ sudo nano /etc/default/grub
Add to the end of that same line “vfio-pci.ids=10de:13c0,10de:0fbb”, substituting your own GPU’s hardware IDs that we identified earlier with lspci -nn. If you are using the ACS patch we just built, also add “pcie_acs_override=downstream” (see the example below). Then run $ sudo update-grub again.
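For reference, with my hardware IDs the finished line looks something like this (drop pcie_acs_override=downstream if you skipped the ACS patch):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on vfio-pci.ids=10de:13c0,10de:0fbb pcie_acs_override=downstream"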

Next edit /etc/modules and add these four lines (one module per line):

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

What this does is load the vfio-pci module, which reads those hardware IDs from your kernel boot command line and grabs the GPU before any other driver can hook into it. VFIO will hold onto your GPU until another module, such as KVM, asks for it.

Save that file and run $ sudo update-initramfs -u

Now reboot.

If you booted into the right kernel, you will see the Tux logo on the boot screen (assuming you enabled the boot logo earlier).

Log in again and open a terminal. Now run $ lspci -k. This lists your PCI devices and the kernel driver each one is using. Find your GPU and make sure it is using vfio-pci.
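For the passed-through card the output should look roughly like this; the “Kernel driver in use” line is the part that matters:

01:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 980]
        Kernel driver in use: vfio-pci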

For your Windows VM I highly recommend using virtio for the system drive, as it will be much faster than emulating IDE, SATA, or other interfaces. Make sure you grab the latest virtio drivers ISO for Windows from Red Hat.
Have this ISO on your PC in addition to the regular Windows installer ISO.

Now we can get to the fun part.

Open virt-manager and create a new VM. Find your Windows ISO file. Next, give your VM at least 8 GB of RAM; that is the recommended amount for most modern games. I have 32 GB of RAM, so I will allot 10 GB to the Windows VM. Also give it 4 CPU cores.

Next, create a disk for the VM. Make sure to use the qcow2 format and make the capacity at least 40 GB. Then make sure to select “customize configuration before install”.

Change the firmware to UEFI and the chipset to Q35. On the CPU page, select “copy host CPU configuration”.

Change the disk bus to VIRTIO, and under performance options set the cache mode to writeback.

Now click add hardware. Add a CD device and link it to the virtio drivers ISO. Change the disk bus for both CD drives to SATA.

Change the NIC device model to VIRTIO.

Add hardware again and add your GPU as well as the audio device that is part of the GPU.

Remove the Spice video, displays, and other stuff we don’t need, and change the USB controller to USB 3.

Now plug in your extra keyboard and mouse, and add each of them as a USB host device.

Enable the boot menu and make sure your installation CD is at the top of the list.

Click begin installation.

If everything is correct, the monitor you have plugged into your GPU will come on and the Windows installer will load. If all is working at this point, force off the VM.

If you have an Nvidia GPU, there is one extra step we need to take. Open a terminal and run $ sudo virsh edit win8.1 (or the name of your virtual machine).

Delete the first line and replace it with
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

Now go all the way to the bottom, right before the closing domain tag, and add:
<qemu:commandline>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='host,hv_time,kvm=off,hv_vendor_id=null'/>
</qemu:commandline>

With that we just told KVM to hide itself from the Windows VM and changed the Hyper-V vendor ID so that the Nvidia driver can’t tell it is running in a virtual machine.

Save the file and start the VM up again. Start the windows installer.

Select custom install and then load driver. Browse to the virtio drivers CD, then VIOSTOR, and then the appropriate Windows version folder. Load the driver.

Your disk will now show up. Click next to start the install.
Depending on your hardware, this will probably be the fastest Windows install you have ever seen.

Let it reboot and finish the installer. When it boots, you will need to install the drivers for the network device and others: open device manager and load them from the virtio CD.

Audio:

If your audio doesn’t work out of the box, there is a way to fix it.

Edit /etc/libvirt/qemu.conf

Uncomment the line nographics_allow_host_audio = 0 and change the 0 to 1.

If you are using PulseAudio, edit /etc/pulse/default.pa

Add load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1 to the end.

Edit /etc/pulse/client.conf and add default-server = 127.0.0.1 to the end.

Restart libvirtd and PulseAudio (or reboot) and it should work.

Side note: if you run libvirt as your user instead of root, you won’t have to do this.

If you have bad audio, crackling, etc., check out my video on a workaround here:

Then setup windows like you always do. Install your graphics drivers, programs, and play games!

To simplify the setup, use Synergy to share the mouse and keyboard from your host PC.

Great tutorial. Will probably go through these steps when I get some spare time. I've found passthrough to be pretty tricky.

Bookmarked this to do when I have the time.

It is tricky. This method took me months to perfect, and it works very well. Report back with your experience!

Makes me want to upgrade. Need to wait until Zen is out first.

Anyone done testing on what kind of performance difference there is in doing this? Or maybe just how different it feels compared to running a video game on a Windows-native host machine? Would absolutely love to run a virtualized Windows setup like this, video acceleration is the only real thing keeping me from switching full-time to Linux.

I don't notice any difference really. The only difference you would see is CPU speed, since the VM shares the CPU with the host. The GPU is fully allotted to the VM.

I totally agree as well, video acceleration on Linux is awful. It takes a lot of work and configuring to get rid of screen tearing and make videos smooth.

PC specs: FX8350, MSI 990FX, 16GB RAM, AMD R9 390 (Guest), Nvidia 610 (host).

Hi, I have attempted this on Kubuntu 16.04 and ran into some difficulties with the VM (Win10). I was able to successfully pass the GPU (R9 390) through to the VM and load all the virtio drivers for ethernet and PCI on the Windows 10 VM.

The Windows 10 VM does detect the GPU, but whenever I am in the process of installing the AMD graphics driver, the VM goes to a blank screen, then reboots after a while and gets stuck on the "your PC has encountered an error" screen. It then restarts after collecting data and gets stuck in this cycle.
I am trying this with the VNC display just for testing. The plan is to run Synergy afterwards, once it's working.

I have tried both Win10 Pro and Home, but with the same results. Note I am attempting this without an activation key, as I wanted to test whether it works properly before purchasing a Win10 license. I am not sure if Win10 not being activated is causing a problem.

I have also attempted this with an HP Windows 7 Pro CD, but the VM just gets stuck on the starting Windows screen. After some research this seems to be a current bug requiring you to set the video to "cirrus" to work. Unfortunately I didn't have any luck with this at all and decided to stick with Win10 instead.

You have to remove any other display devices for Windows to use the GPU. Remove VNC, Cirrus, anything like I said to do in the guide.

Are you using OVMF? My guide is for Debian, Ubuntu does not have OVMF so keep that in mind.

Thanks for the guide, I'm going to be doing this when I get a new GPU, hopefully in the next month or so. Any info on how the 10 series works with this? I'm considering a 1080 or 1080 Ti, depending on price and whether it works well.

Any new GPU will work just fine since they all support UEFI.

I know that, but I also know that Nvidia does some shady shit with locking non-Quadro GPUs out of VMs. I was more inquiring about whether they've upped the ante with the release of the 10 series. If there's nothing there, I'll be in the market soon.

No, the same steps I list in the guide work.

sweet!

Great to see these kinds of tutorials posted.
Keep up the good work!

I am using OVMF; I was able to get it from the base repos. I am not sure if it was added recently to Kubuntu (16.04).

I actually deleted and then re-did the VM, removing all display devices, so that on first boot it displays through my passthrough GPU. I was not able to go ahead with the Windows install though, as I was unable to assign a USB mouse and keyboard to the VM. I add the USB mouse and keyboard through the add hardware option in virt-manager (1.3.2), but nothing happens. I tried doing this while the VM was running and also when the VM was off. I tried adding another USB mouse and keyboard just in case, but still no luck. Currently looking into this.

Just to make sure, there weren't any errors when you booted the VM with the mouse and keyboard added? And you are running virt-manager as root?

There were no errors in virt-manager when I booted with the mouse and keyboard added. The VM display shows the "press any key to boot from CD" option, then eventually ends up in the UEFI shell, where I get no response from the added keyboard. At this point the mouse and keyboard still function in the host OS while the VM is running.

I was actually running virt-manager as the current user (it is part of the libvirtd group). I tried it as root but no difference.

If you are running as root it will ask for your password when it opens.

You are using a second keyboard and mouse for the VM right?

And just to clarify, the GPU does output to the monitor?

Yes, I have a wired mouse and keyboard for the VM that was added as two separate USB devices to the VM in virt-manager.

I am using a dual-input monitor. The host connection is via VGA (Nvidia 610) and the guest connection is via DVI (R9 390). When I start the VM I get output from the DVI connection, so the GPU was passed through successfully.

Do I need to blacklist the USB devices before adding them to the VM?