GPU Passthrough with KVM: Have Your Cake and Eat It Too

Nothing ventured...nothing gained. I'll give it a go tomorrow, thank you.

Success...

Before...

And after...

Which moved my Windows Experience score from 5.3 on the CPU to 7.5; the only thing lagging now is disk performance, at 5.9.

Which changed my core utilization from looking like this...

To looking like this...

Thank you again @mythicalcreature. I didn't even have to make a new VM, and no BSODs...lol


I have. The R9 270X works great passed through to Windows, and my NVIDIA card is adequate for what it does in Linux. All set up in Virt Manager, except for blacklisting the radeon kernel module and the vfio & pci-stub stuff.
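
For reference, the outside-Virt-Manager part boils down to a couple of modprobe.d entries, roughly like this; the file names are just convention, and the IDs are placeholders (use the vendor:device pair lspci -nn prints for your card):

# /etc/modprobe.d/blacklist.conf -- keep the host driver off the passthrough card
blacklist radeon

# /etc/modprobe.d/vfio.conf -- hand the card (and its HDMI audio) to vfio-pci at boot; example IDs
options vfio-pci ids=1002:6810,1002:aab0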

NVIDIA is fine for the host machine; it just doesn't play well when passed through.

I know. I found out the hard way.

Just finished benchmarking the Windows guest for video performance; here are the results.

FurMark shows the excessive temperature the 270X reaches under 100% load. This is mainly an airflow issue, since the card is sandwiched between another 270 above it and the PSU below it.

Nothing earthshaking in any of the tests, but it does give me a baseline to work from as I try to improve the guest's video performance under KVM. Not bad for what it's doing, in my opinion.

Looks good! Glad it is working out for you.

You're welcome. It is a topic I like to discuss and I've always felt that more people should try it. Thank you for making the thread.

I thought I'd share a new config I tried that may have reduced my audio stuttering a bit further. It is always kind of hard to tell with audio, though, and it seems to perform differently based on system load.

I discussed setting CPU pinning in the VM config and isolcpus in GRUB earlier; in addition to that, I also tried setting nohz_full on the same cores as the isolcpus ones. Together these isolate the cores from the scheduler and from kernel timer ticks. Does this make sense for KVM? Honestly, I'm not sure. I'm definitely not an expert at this; I'm just putting together what I find and trying it as a subjective test.

My grub conf now has these entries:

isolcpus=2-5 nohz_full=2-5
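
For anyone wondering where those go: they get appended to the default kernel command line in /etc/default/grub, after which the config is regenerated. A rough sketch, assuming a stock layout (the other flags on the line are whatever you already have, and the output path differs on some distros):

GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=2-5 nohz_full=2-5"

sudo grub-mkconfig -o /boot/grub/grub.cfg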

KVM conf:

...
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='5'/>
  <vcpupin vcpu='1' cpuset='4'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='2'/>
  <emulatorpin cpuset='2-5'/>
</cputune>
...
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='1'/>
</cpu>
...
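
A quick way to confirm the pinning actually took hold is to ask libvirt directly; the domain name "win7" below is just a placeholder for whatever your VM is called:

virsh vcpupin win7
virsh vcpuinfo win7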

I really should benchmark whenever I make changes to at least see if it is affecting CPU performance.


I am definitely planning to do this on my next build.

Well, I've given up and bought a Fire TV Stick on Prime Day. I keep wanting to call it a Boom-Stick.
And now I'll try pushing Kodi onto that...

Damn! I still haven't worked on the actual topic of this thread.

@lessershoe More people are asking about KVM and I would like to help out with it. The thing is, I'm considering grabbing another HIS 7870 or a 270 for KVM. My understanding is that CrossFire should work on the SUSE host, just not in the Windows VM. Correct?

This thread sparked my interest, so I tried my hand at PCI passthrough just to see if I could do it; I'm not much of a gamer.

The guest is Windows 7. NOTE: I am using the default BIOS for the VM, not UEFI.

Managed to do it on an ASUS M5A97 EVO v2 motherboard, BIOS revision 2501, primary GPU an OEM Radeon HD 6570 (not mine, actually borrowed from work), secondary GPU (passed to the guest) a Sapphire 260X (the basic version, not the slightly longer OC version). My CPU is an FX-8320 and I have 8 GiB of RAM. The distro I used is Manjaro, kernel version 4.1.2.

Everything seems to be working fine so far. I did a few runs of the Unigine Valley and Heaven benchmarks to compare with other numbers online, and my results seem OK. The Windows Experience Index is not out of the ordinary: the lowest is my drive at 6.4, and graphics is at 7.6.

I didn't do it from virt-manager, so it should work on any distro out there with a kernel newer than 4.1 (more on that in a minute). On the other hand, you do need to use the terminal.

How I did it:
- modified /etc/mkinitcpio.conf to include the modules in the initramfs image. Other distros use different tools for this.

MODULES="pci-stub vfio vfio_iommu_type1 vfio-pci vfio-virqfd"

NOTE: pci-stub is probably useless; it's left over from an earlier attempt.

- regenerated the initramfs.

mkinitcpio -p linux

Here are my kernel command-line options:

$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.1-x86_64 root=UUID=97160449-780a-4ca3-99e3-962f682e14dd rw rootflags=space_cache,autodefrag,relatime amd_iommu=on iommu=pt quiet splash vfio-pci.ids=1002:6658,1002:aac0

EDIT: removed vfio_iommu_type1.allow_unsafe_interrupts=1, and it works just fine without it.

Modify /etc/default/grub if you want those kernel parameters to persist when grub-mkconfig (grub2-mkconfig on some distros) is run, which happens whenever your package manager installs a new kernel image.
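
For this setup, that means carrying the IOMMU and vfio parameters from the command line above over into the default line; roughly like this, with the rest of the line being whatever your distro already puts there:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt vfio-pci.ids=1002:6658,1002:aac0"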

The start script:

# cat start_win.sh
#!/bin/bash
QEMU_ALSA_DAC_BUFFER_SIZE=0 QEMU_ALSA_DAC_PERIOD_SIZE=0 QEMU_AUDIO_DRV=alsa \
qemu-system-x86_64 -enable-kvm -m 6144 -cpu host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
	-smp 6,sockets=1,cores=6,threads=1 \
	-device virtio-scsi-pci,id=scsi \
	-drive file=/home/user/data/vms/virtio.iso,id=virtiocd,if=none -device ide-cd,bus=ide.1,drive=virtiocd \
	-device vfio-pci,host=07:00.0,x-vga=on \
	-device vfio-pci,host=07:00.1 \
	-drive file=/home/user/data/vms/win7.img,id=disk,format=raw,if=none -device scsi-hd,drive=disk \
	-cdrom /home/user/data/vms/win7.iso \
	-vga none \
	-soundhw ac97 \
	-device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	-usb -usbdevice host:09da:5fc6 -usbdevice host:09da:0260

If you want to use pci-stub (i.e., you have a kernel < 4.1), you need to use the vfio bind script from several of the links in this thread.

The last line is me redirecting the mouse and keyboard.
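
The vendor:product pairs for those -usbdevice host:... arguments come from lsusb; the grep is only there to narrow the output and may not match how your devices describe themselves:

lsusb | grep -i -e mouse -e keyboard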

The virtio part is for installing the virtio drivers for the virtual hard drive (better performance, as per Alex's blog). You can also start with a regular hard drive (-hda /path/to/win7/image), create the CD-ROM with the virtio drivers, boot the machine to install the drivers, then change your hard drive to use the virtio-scsi driver (tried it, it works). If you decide to build the machine with the above configuration, you need to load the drivers from the CD; they are in the folder vioscsi/w7.
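
In other words, the drive lines for the two approaches look roughly like this (paths as in my script above):

# option A: plain emulated disk first; no extra drivers needed to boot
-hda /home/user/data/vms/win7.img

# option B: virtio-scsi from the start; needs the vioscsi driver loaded during install
-device virtio-scsi-pci,id=scsi \
-drive file=/home/user/data/vms/win7.img,id=disk,format=raw,if=none -device scsi-hd,drive=disk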

Booted the VM a few times (without restarting the host), and it didn't act weird, like people are reporting for the Bonaire GPUs. Maybe I'm just lucky.

My first attempt failed miserably because I messed with AMD overdrive and Windows would get stuck after logging in. I even got a kernel panic triggered by the stack protection. So yeah.

The network connection is really flaky; I'll try to assign virtio drivers to it too.

Also, I keep getting messages like this:

[  458.450163] AMD-Vi: Completion-Wait loop timed out
[  459.444907] AMD-Vi: Event logged [IOTLB_INV_TIMEOUT device=07:00.0 address=0x00000002552232b0]

They seem harmless though.

If you're like me and don't have a second GPU for the host to use, and don't want to buy a card just to find out that you cannot do PCI passthrough, you can use the above script to test whether passthrough works before committing to a new GPU: put the address of your primary GPU in vfio-pci.ids, and after you reboot you will end up in a terminal at 800x600. From there you can use the script to boot the VM and see if it works.
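
The vendor:device pairs for vfio-pci.ids are the bracketed numbers in lspci -nn output; the grep pattern is just an example to cut it down to the interesting devices:

lspci -nn | grep -i -e vga -e audio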

If I discover anything interesting or run into issues I'll post here.

UPDATE(s):

Added the sound card. Windows wasn't at all happy about adding it after Catalyst had installed the drivers for the HDMI audio output; I had to rebuild the VM with the sound card present from the beginning (the -soundhw part of the script). This could have been avoided if I hadn't passed through the HDMI audio in the first place (the device at 07:00.1), although both the GPU and its HDMI audio still need to be claimed by vfio-pci, since they are part of the same IOMMU group. Also, the audio is crap, choppy as hell, but I'll experiment some other time.

I forgot to mention, this is for BIOS-based VMs, not UEFI. If you want UEFI, here's a tutorial on how to do that on Arch-based distros: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
It's not mentioned there, but if you want to use UEFI from virt-manager, you need to edit /etc/libvirt/qemu.conf; there's an option at the bottom of the file that you uncomment and point at the location of the UEFI files.
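
The option in question is the nvram list. Something like the following; the OVMF paths here are just examples, point them at wherever your distro installs the firmware files:

nvram = [
  "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]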

I added environment variables for the QEMU audio; it's just a baseline, I didn't get the chance to tinker with them.

Added some performance options for the guest (from http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html). I don't know if it's because of them or just natural statistical variance, but I now get 1433 points in Unigine Valley (default options), up from about 1300.

I also added a virtio Ethernet controller. You can install the drivers when you install Windows; they are in the NetKVM folder on the virtio ISO. It made a difference for me; browsing feels a lot snappier and less sluggish.

Managed to get my sound working. All I had to do was change the sound device to ac97 and install the Realtek AC'97 drivers inside the guest. As far as I can tell, those only exist for Windows 7 and earlier; I haven't seen a version for Windows 8 and higher (maybe the drivers ship with the OS by default?).


It is discussed a bit here. Sounds like you should avoid it if possible.
http://vfio.blogspot.com/2014/08/vfiovga-faq.html (Question 8)

Maybe it was addressed by this patch, if you are running a new enough QEMU.

Thank you for the tip about unsafe_interrupts. I tried it without, and it works; updated my original post.

Does anyone happen to know if you can do this with a single GPU? What I am thinking is not binding the GPU to vfio-pci at boot, but running a script that unbinds it from the radeon driver and adds it to vfio-pci at runtime, then starts the VM (something like the sketch below). As far as I know it doesn't work if the host drivers have touched the card, but I am hoping that I am wrong.
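
An untested sketch of what I mean, reusing the 07:00.0 address and 1002:6658 ID from my earlier post (substitute your own):

#!/bin/bash
# make sure vfio-pci is around to claim the card
modprobe vfio-pci
# detach the GPU from the radeon driver...
echo 0000:07:00.0 > /sys/bus/pci/devices/0000:07:00.0/driver/unbind
# ...then tell vfio-pci to pick up anything with this vendor:device ID
echo 1002:6658 > /sys/bus/pci/drivers/vfio-pci/new_id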

My thoughts exactly.

You can take the card away from the host; however, when the VM shuts down, the host has to reboot to use the card again. Unlike keyboards and mice and things like that, which are expected to be plug and play.

You're right, tried it just now. Unplugged the GPU while Linux was running, plugged it back in, no image.

Tried it myself before, so that's how I knew. If you pass through something like a keyboard and mouse, control goes back to the host when the VM shuts down. But with the GPU it's a little more difficult and requires a full reboot for the host to take control again. Who knew operating systems didn't like things like PCI Express devices being unplugged at runtime, lawl.

1 Like

Does anyone know if this will work for my build? My current PC is http://pcpartpicker.com/user/doppenshloppen/saved/LwQ6Mp plus a Radeon 3650 that I would use for the host. Or does my ASUS mobo screw me over?

According to the manual it does support IOMMU, so that is in your favor; as long as the BIOS provides a proper IVRS table you should be fine. Be sure to report back and let us know. There is also another thread I started a while back about known working hardware; you should add yours to the list.

https://forum.teksyndicate.com/t/pci-passthrough-known-working-hardware
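
A quick sanity check once amd_iommu=on is on your kernel command line; these are just the standard commands, and the exact log lines vary from board to board:

dmesg | grep -i -e AMD-Vi -e IVRS
find /sys/kernel/iommu_groups/ -type l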

The manual for the MB can be found here...

http://dlcdnet.asus.com/pub/ASUS/mb/SocketAM3+/M5A97_R2.0/E8046_M5A97_R2.pdf

If I'm successful I'll definitely post in that thread.