
Play games in Windows on Linux! PCI passthrough quick guide



Yup. Both components of the GTX 970 are using vfio-pci.

I checked the IOMMU groups, and the video card is in group 1 along with one other thing, a Skylake PCIe controller. That wouldn't be a problem, right?


The GPU and its audio device have to be the only ones in that IOMMU group, or you need the ACS patch.

Search the original post for find /sys/kernel/iommu_groups/ -type l to read more info.
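For convenience, a slightly more readable variant of that command that prints one device per line with its group number (a sketch; run on the host after enabling the IOMMU):

```shell
# List every IOMMU group and the devices in it; prints nothing
# if /sys/kernel/iommu_groups is empty (IOMMU disabled).
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue          # skip if the glob matched nothing
    path="${dev#/sys/kernel/iommu_groups/}"
    echo "IOMMU group ${path%%/*}: $(basename "$dev")"
done
```

If the GPU shares a group with anything other than its own audio function, that's the problem.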


Here's another pastebin with some lspci commands as well as the iommu groups: link redacted

I wonder if the IOMMU group somehow changed, because like I said it had been working fine for a long time until the update yesterday.


You are going to need the ACS patch. My guess is that the update added kernel modules with more Skylake support, and that's why the grouping changed.


Well, I am going to have to adapt the instructions and commands to do the ACS patch for my distro and kernel.

Definitely gives me a starting point, though. Thanks.
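For anyone else adapting this: with Alex Williamson's ACS override patch applied to the kernel, the override itself is switched on with a kernel boot parameter. An illustrative /etc/default/grub entry (values are examples; intel_iommu=on assumes an Intel board, and distros vary in how they regenerate the GRUB config):

```shell
# Kernel command line for a kernel built with the ACS override patch
# (example values; adjust for your own hardware and distro):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on pcie_acs_override=downstream,multifunction"
# afterwards: sudo update-grub (or grub-mkconfig -o /boot/grub/grub.cfg) and reboot
```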


Although... does that explain why the VM doesn't run? The IOMMU groups changing explains why the passthrough isn't working, but I removed the video card from the VM (via the GUI) and it still wouldn't boot. Does that remove the video card from the XML file too? I don't think I see it in the config.


No, it doesn't explain that. I would guess the update also possibly broke OVMF, since it looks like you had to extract your own NVRAM. That isn't part of that guide, so did you set that up to enable UEFI for the 970?


I honestly don't remember having to do anything special beyond adding the repo for the OVMF firmware.


Hmm. Another thing to try is recreating the VM in virt-manager. Link it to the same disk.

Sometimes QEMU will corrupt the NVRAM it generates.


I am pretty sure that worked. The VM booted and it looks like Windows 10 is 'getting devices ready.' Looks like I'm going to have to reinstall drivers, possibly, and set up the VM exactly the same as the previous one. But it really looks like OVMF got screwed up somehow.


That must be it then. I'm no expert, I only know what I've learned and I share what I learn in these videos.

My best guess is that all the adding/switching/updating corrupts the UEFI NVRAM image it uses, and you have to recreate it by recreating the VM. I've had this happen a lot in my testing.


And you're doing a great job on that!


Thanks. Only a high school education trying to find my way in the world. I love tech and Linux so I'm trying to learn all I can.

I'm also working on my RHCE. :)

When Wendell rebooted the YouTube channel it seemed like a great time to get serious about mine, and he let me post my videos here which I am forever grateful for.


Thanks a lot. You are definitely on your way. I have been fiddling with Linux for the last decade and have only really started to be serious about it in the last couple years. I definitely don't know much, but I am learning.

Looks like I just have to do the CPU pinning and hide the fact that it's a VM from Windows, and I think I'll be right back to where I left off. It would be interesting to find out exactly why it corrupts like that. Sure seems like a poor system if it breaks that easily.

Thanks again.

edit: Also, really good job with the video.

I'm curious: the way you hid the VM from Windows is different from how I have done it. Is there a difference between the two methods? I mean, the way I did it (according to the guide I linked to earlier) works for me: Windows doesn't know it's in a VM, the Code 43 goes away, and the Nvidia driver loads. Probably one of those six-of-one, half-a-dozen-of-the-other sorts of things.
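For reference, the older qemu-argument approach I'm describing usually looks something like this in the domain XML (a sketch; the xmlns:qemu declaration on the domain element is required for qemu:commandline to be accepted):

```xml
<!-- Sketch of the qemu-argument method of hiding KVM from the guest -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <!-- kvm=off hides the KVM CPUID signature from the Nvidia driver -->
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,kvm=off'/>
  </qemu:commandline>
</domain>
```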


You don't need qemu parameter passthrough for kvm=off (and the other arguments probably aren't even needed).
For Nvidia GPUs you only need

<kvm>
  <hidden state='on'/>
</kvm>

inside <features>, plus

<cpu mode='host-passthrough'> ...

Libvirt has supported these since version 1.2.8, which should be available for Debian derivatives.
For more info see


However, since 1.2.8 isn't available on Debian stretch yet, my guide is still accurate.


I used to not have to do the Hyper-V masking, but a recent update made it necessary again. It really doesn't matter; it's just one simple line to add.

But if your setup works without it then by all means go for it, that's the way I used to do it.
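If you do end up needing the Hyper-V masking, in libvirt it's typically done alongside the kvm hidden flag, something like the following (a sketch; the vendor_id value is arbitrary, up to 12 characters, and the element needs a reasonably recent libvirt):

```xml
<features>
  <hyperv>
    <!-- spoof the Hyper-V vendor ID so the Nvidia driver doesn't bail -->
    <vendor_id state='on' value='0123456789ab'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```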


According to Fire Strike benchmarks it is about the same (actually, scores were higher inside the VM, but within the margin of error).
Only the physics score was low, as it seemed unable to use PhysX; I tested PhysX support with other tools, though, and it did work.
The biggest difference you can feel comes from CPU resources: whatever the Linux host uses, and if you 'pass through' cores, the cores reserved for the host are ones Windows doesn't get.


"passing" cores to a VM isn't a thing, unless you are referring to CPU pinning which is not "passing" a core through, you are just forcing a virtual core to stay on a the same host thread.

At any rate I have in an upcoming video benchmarks for the performance difference between Windows, Linux, and PCI passthrough. So sit tight and all will be revealed. :)


You can use the OVMF UEFI in Ubuntu as Wendell showed in his initial 'guide' on this forum a year ago.
I used to get it through:
$ wget <repo URL>/edk2.git-ovmf-x64-0-20160226.b1536.gd2ba6f4.noarch.rpm
$ rpm2cpio edk2.git-ovmf-x64-0-20160226.b1536.gd2ba6f4.noarch.rpm | cpio -idmv
# mv ./usr/share/* /usr/share/ && rm -rd ./usr/share/
(The rpm2cpio step unpacks the rpm into the current directory so the mv has something to move.)