Play games in Windows on Linux! PCI passthrough quick guide

Hey guys, I need some help (pretty please?). I might have skipped some steps here and there (I’m using Manjaro), but here’s my problem:
I configured the Windows VM, but when I power it on, my CPU fan starts ramping up and I can see CPU usage in Virt-Manager, but when I switch the input of my TV to my GPU, there’s a black screen. I passed through a GT 1030, a PCIe USB expansion card, and the SATA controller. I had Windows installed long before I decided to virtualize it; it’s on a SATA SSD and I have Linux on an M.2 drive (glorious ADATA SX8200 Pro that Wendell reviewed).

I’m not sure what I should post, my VM config xml? Or anything else that can help us debug this issue? Booting into Windows works fine. Any ideas?

Edit: I usually Force Off the VM from Virt-Manager, but it seems Windows Event Viewer doesn’t report anything. No boot, no “unclean” shutdown, nothing. It seems my problem lies in booting this, but I have no idea where to start looking. The <disk> part is set to /dev/sda (yes, that is the drive) and is just as it is in the guide, but with ‘sata’ instead of ‘virtio’. I could post the VM XML if I have to.

Since you are using Manjaro, I would recommend using this guide to set up passthrough

For passthrough, use a guide for the distro or distro family you are using, and ideally one for the exact version of your distro. So for Arch-based distros, use the Wiki page, since it is maintained and up to date.

Read that page, then make a new thread if you have problems.
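A quick first sanity check from that Wiki page is listing your IOMMU groups: the GPU you want to pass through should sit in its own group (or share it only with its own audio function and a PCIe bridge). A minimal sketch, assuming the standard sysfs layout; it prints nothing at all if the IOMMU is disabled (intel_iommu=on / amd_iommu=on missing from the kernel command line, or VT-d/AMD-Vi off in the BIOS):

```shell
#!/bin/sh
# List every IOMMU group and the devices in it.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue
    group=${dev%/devices/*}   # strip /devices/<addr>
    group=${group##*/}        # keep only the group number
    addr=${dev##*/}           # PCI address, e.g. 0000:01:00.0
    # lspci gives a readable name; fall back to the bare address
    desc=$(lspci -nns "$addr" 2>/dev/null || echo "$addr")
    echo "IOMMU group $group: $desc"
done
```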

Pasting the XML would be handy; using the summary bit will stop it being super long :slight_smile:

You could alternatively try it without using the existing drive: just set up a storage pool and see if you can install a fresh copy of Windows (don’t bother with the key yet).
It might be drivers causing the issue.
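If you go the fresh-install route, creating the scratch disk image is a one-liner. A sketch with qemu-img (the path, name, and size are just examples; in practice you would put it where your storage pool lives, typically /var/lib/libvirt/images):

```shell
#!/bin/sh
# Create a sparse qcow2 image for a throwaway Windows install.
# 60G is the virtual size; the file only grows as the guest writes.
command -v qemu-img >/dev/null 2>&1 || { echo "qemu-img not installed"; exit 0; }
img=${TMPDIR:-/tmp}/win10-test.qcow2   # e.g. /var/lib/libvirt/images/win10-test.qcow2
qemu-img create -f qcow2 "$img" 60G
qemu-img info "$img"
```

Attach it as a SATA disk first; Windows only sees a virtio disk once the virtio drivers are installed in the guest.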

I followed the Arch wiki for the most part. I got stuck at the boot process.

I’ve been looking a little; sorry to keep you waiting.

I have no idea how to do this…

<domain type='kvm'>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="">
      <libosinfo:os id=""/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>3</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-4.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='whatever'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sda'/>
      <target dev='vdb' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sdb' bus='sata'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:16:24:b0'/>
      <source network='default'/>
      <model type='e1000e'/>
      <link state='up'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <sound model='ich9'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>

I’d be glad to edit and post the “short bit”, I’m very sorry.

Sorry, was AFK.
I use the first line as:

<domain type='kvm' xmlns:qemu=''>

but you seem to pass it through as a metadata bit. I’ve not come across that yet :slight_smile:

The vendor ID seems to work best when it’s 12 characters long for Nvidia GPUs.
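For example (the value itself is arbitrary; per the tip above it reportedly just needs to be present and 12 characters long for Nvidia):

```xml
<hyperv>
  <!-- any 12-character string works; '123456789abc' is just an example -->
  <vendor_id state='on' value='123456789abc'/>
</hyperv>
```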


Thanks. I’ll see what else I can fiddle around with.

After I messed around a little, I discovered I have a problem with OVMF, but I have no idea how to fix it. I can’t change firmware from BIOS to UEFI, which explains why Windows didn’t detect any boot in Event Viewer.

Just FYI, I do have
nvram = [

in /etc/libvirt/qemu.conf at the end of the file. Any ideas?
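For reference, a populated nvram list usually looks something like this; the exact OVMF paths vary by distro and package version, so check where your ovmf/edk2 package actually installed the files:

```
nvram = [
  "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]
```

After editing, restart libvirtd. Also note that virt-manager won’t switch an existing VM between BIOS and UEFI; the firmware choice is only offered when the VM is first created, which matches the “just make a new one” advice below.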

edit: I also tried:
groupmod -g 78 kvm
groupmod -g 995 kvm
, but that didn’t work, so I reverted back to 992 (which it was originally).

I just make a new one and copy the settings across :man_shrugging:

Moved to the new category.


Am I good to go with the newest version of Debian, or should I still use Debian Stretch?

If you use this guide, use Stretch. For any other distro or version, find a guide for that distro.


No, you can certainly use Buster.

The guide is not dependent on Stretch (Debian 9); it will work on Debian 10.

So I got the VM working in Debian 10, but I would really like to know whether it would be better or faster if the VM were on a drive that was not the host’s drive. And if so, how to do it.
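There’s no inherent need for the guest disk to live on the host’s OS drive: libvirt can point at an image file on any mounted filesystem, or you can hand the guest a whole separate block device, which skips the host filesystem layer entirely. A sketch of the whole-disk variant for the VM’s XML (the by-id name is a placeholder; `ls -l /dev/disk/by-id/` shows yours):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- /dev/disk/by-id/ names are stable across reboots, unlike /dev/sdX -->
  <source dev='/dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL'/>
  <target dev='sda' bus='sata'/>
</disk>
```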

Hi, first off, thank you very much for sharing your experiences with the rest of us, and taking the time for follow-up questions.

I have one!
Let’s say my base OS is a headless Linux distro, and I have two VMs: one Linux and one Windows. Could I start the Linux VM, have it come up with my video card, and then, without stopping the VM or shutting down that system, remove the video card from the Linux VM so that the Windows VM can start up with it?
And then a follow-up question: could I then snatch the video card from the Windows VM without powering that one down, to connect it to the Linux VM again?

The reason I ask is that I’m pretty reluctant to sacrifice a screen by connecting it to the on-board video card for the headless Linux installation (I’m thinking of attaching a separate device with a few buttons that I could map to the commands to switch VMs on the headless Linux base of the PC), but I do want to keep some applications running. It’d be silly to install those on both systems if one can keep running!

Let me know if you have any questions, I’m new to the whole linux world, let alone GPU passthroughs :smiley:

Thanks in advance!

You can write a script for it; check out the successful passthrough builds on the Arch wiki. Go into their GitHub repos and check out their scripts and such to get an idea of how you will need to customize it.

Essentially, you just run a script that detaches your GPU from Linux when the VM boots.
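A minimal sketch of that idea using sysfs driver_override (the PCI addresses are examples; find yours with `lspci -nnk`). As written it only prints the steps; swap the `run` function to execute them for real, as root:

```shell
#!/bin/sh
# Hand a GPU (and its HDMI audio function) over to vfio-pci.
# Addresses below are EXAMPLES -- substitute your own from lspci.
GPU=0000:01:00.0
GPU_AUDIO=0000:01:00.1

run() { echo "+ $*"; }          # dry run; use  run() { "$@"; }  to apply

for dev in "$GPU" "$GPU_AUDIO"; do
    # 1. detach the device from whatever driver currently owns it
    run sh -c "echo $dev > /sys/bus/pci/devices/$dev/driver/unbind"
    # 2. pin vfio-pci as the driver for this device
    run sh -c "echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override"
    # 3. ask the kernel to (re)probe, which now picks vfio-pci
    run sh -c "echo $dev > /sys/bus/pci/drivers_probe"
done
```

Reversing it (write an empty driver_override, then reprobe) hands the card back to the host driver, but whether amdgpu/nvidia recovers cleanly varies a lot by GPU and driver, which is why many people only rebind at VM start and stop.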


Thanks for the quick reply :slight_smile:
To be clear: disconnecting or reconnecting a GPU to a running VM is nothing to fear? :smiley:

Thanks again!

It’s been a few late nights trying to follow (well, almost follow) this guide, and I came across multiple issues. So here it goes.
I mostly followed a guide with the steps below:

guide.txt (1.3 KB)

The first thing to notice is that he passes iommu=1 in /etc/default/grub, while you don’t.

Upon verification I get
IOMMY Ver.txt (408 Bytes)
Does this mean I can’t pass through a GPU? (The weird thing is the CPU is an Intel Xeon E5-2667; only the GPU is AMD.) Or is it just an informational message?

Also grep -e DMAR -e IOMMU gives me
grep -e DMAR -e IOMMU.txt (1.7 KB)

Upon verifying whether the AMD GPU uses vfio instead of the amdgpu driver (the other guide mentions that only the kernel driver needs to show up as vfio, not the kernel module, which in my case shows amdgpu), the command
lspci -vnn | grep -iP "vga|amdgpu|nvidia|nouveau|vfio-pci"
(he uses a different command instead of your lspci -k, but the results are the same) gives
Kernel_driver module.txt (662 Bytes)

Finally, on each reboot I get some errors/warnings (I also tried update-initramfs -u one more time):
warnings.txt (22.3 KB)

Why so many missing drivers/firmware when running update-initramfs -u?
…and what’s that “PTE Read access is not set”?
Exactly below that (I had to record it to see it), there are also some errors, like:
Resuming from hibernation
/dev/sdc2: clean617137/6717440 files, 4938101/26055424 blocks
FAILED Failed to start Load Kernel Modules
[ok]Started Flush Journal to Persistent Storage
[failed]FAILED Failed to start Load Kernel Modules
[Failed] Failed to start - raise network interface (probably this is because the second Ethernet port isn’t connected? I have a connection from the other LAN port)

Of course, I didn’t continue to the AppArmor part because I don’t think things are correctly set up.
Any thoughts / observations / solutions?

Thank you in advance

PS1: Forgot to mention that my GPU is in an isolated IOMMU group, and for my CPU the command lscpu | grep Virt returns
VT-x only. In another example I noticed it additionally had
Virtualization type = full

PS2: The system is a Dell Precision T7600 based on the Intel C600/C602 chipset, with dual E5-2667 Xeon CPUs, 64 GB RAM, and 2 GPUs: a Quadro 2000 with the legendary Nvidia drivers for the host running Debian 10, and an AMD RX 570 for the future Win 10 VM. In the BIOS I enabled everything related to virtualization. Some other options were:
Security -> TPM Security (Trusted Platform Module) -> Disabled
CPU XD Support (The operating system can use this feature to hinder software that exploits buffer overflows)->Enabled
Virtualization =>enabled
VT for Direct I/O->enabled
PCI MMIO Space Size->Large
PCI Bus Configuration->64 Pci Buses (Also has 128 and 256 to choose from)

PS3: @wendell is there a chance you could give a hint? (Thank you)

After the command
apt-get install firmware-linux
I think the initial boot messages (at least)
[3.202732] [drm:amdgpu_pci_probe [amdgpu]] <> amdgpu requires firmware installed
[3.202020] See for information about missing firmware
… went away

I still have the
[0.70263] DMAR: DRHD: handling fault status reg 2
[0.70269] DMAR [DMA Read] Request device [00:1f.2] fault addr 2c703000 [fault reason 06] PTE Read access is not set
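The `[00:1f.2]` in that message is the PCI address of the device whose DMA the IOMMU blocked, and on Intel chipsets 00:1f.2 is normally the onboard SATA/AHCI controller. You can pull the address out of the log and ask lspci what it is. A sketch (the log line is pasted into a variable here just for illustration):

```shell
#!/bin/sh
# Extract the faulting device's PCI address from a DMAR log line.
line='DMAR [DMA Read] Request device [00:1f.2] fault addr 2c703000 [fault reason 06] PTE Read access is not set'
bdf=$(printf '%s\n' "$line" | sed -n 's/.*Request device \[\([0-9a-f:.]*\)\].*/\1/p')
echo "$bdf"                           # 00:1f.2
lspci -s "$bdf" 2>/dev/null || true   # names the device, if lspci is available
```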

Hi everyone.
I do realize that this topic has not been alive for over two months, but I find this to be such a helpful guide and conversation that I created an account to take part in it, and in other similar topics I suppose.

So over a year ago this site came up when I was looking at what to do next with Linux. I normally run Linux on everything from the server, to my laptop, to the wife’s laptop, to the set-top box for digital TV (back when that was a thing, even).

The problem was… some things need Windows, and booting into Windows for a few hours a month at most was not wanted… This was the solution. My server kept on “serving”, but on the TV, Windows 10 booted up with its own GPU… When Windows 10 needed to reboot, it did, but the rest of the system was untouched… Windows 10 for some reason seems to want to reboot more than be in use…

Well, that was over a year ago now; I’ve managed everything without issues, and never got around to registering back then.
Thanks so much to the OP for this guide, both the video and the forum post.

This time around I found this thread once more, and this time because I wanted to do it on my laptop… Yes, I know that’s not the same, but it seems to me that what I could only dream of being possible back when this guide was made is not so far-fetched now. In the worst case I’d need a GPU via Thunderbolt, I guess. But I’m here now, and I will try to keep updating here with my findings.

The laptop has a GTX 1060 and Intel graphics. I was only hoping for a way to dedicate the Nvidia card to a virtual Windows (or maybe even the laptop’s Windows partition) and have it output through the HDMI port. Also to pass through the mouse/keyboard and such. At the moment I’m looking at these IOMMU groups.

In my laptop I’ve got 3 things in the same group as my Nvidia GPU. But it seems to make sense… Two, of course, are the audio and video (Nvidia); the last one is “00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)”

For the moment, that’s where I am at.

I am sorry if this topic already covers more of all this (my ramblings) further along; I did read over a year’s worth before writing here, but I will continue to read this and other entries on this forum.

Thanks again OP for all the help this guide has given me.