Ryzen GPU Passthrough Setup Guide: Fedora 26 + Windows Gaming on Linux | Level One Techs

Yes I am really sure about that.

I have done quite a few setups on Debian and Ubuntu in the past year. The problem is that the kvm option is not present/working in Ubuntu's KVM version, which is really outdated.

It should be in Debian's version (Debian stretch), but that's still not enough to satisfy recent Nvidia drivers, which is why you also have to spoof the vendor name.

I shared this in the hope that some people here find it useful. It took me ages to figure that out and get past Error 43. This is also the step where almost anyone with a GTX card will get stuck.
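For anyone stuck there, the fix ends up in the libvirt domain XML; roughly something like this (the vendor_id value is an arbitrary example, anything that doesn't read as a known hypervisor string will do):

    <features>
      <hyperv>
        <!-- spoofed vendor string so the Nvidia driver doesn't detect a known hypervisor -->
        <vendor_id state='on' value='whatever123'/>
      </hyperv>
      <kvm>
        <!-- hide the KVM signature from the guest -->
        <hidden state='on'/>
      </kvm>
    </features>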

In my experience there is not a single issue with rebooting VMs as long as you allow unsafe interrupts.
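For reference, that is a module option on vfio_iommu_type1; a one-liner along these lines (the file name is just an example) takes care of it:

    # /etc/modprobe.d/unsafe-interrupts.conf (example file name)
    options vfio_iommu_type1 allow_unsafe_interrupts=1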

I am running on an i7 4790k with an ACS-patched kernel, but rebooting a VM, and even moving the GPU into a different VM, works flawlessly without rebooting the HV.

It is important that the cards you want to pass through are properly bound to vfio after booting the HV. Then you are free to do whatever you need to do.
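In case it helps, a minimal sketch of that binding via modprobe.d, with vfio-pci claiming the cards by vendor:device ID (the IDs below are examples for a GTX 970 and its HDMI audio function; look up your own with lspci -nn):

    # /etc/modprobe.d/vfio.conf
    options vfio-pci ids=10de:13c2,10de:0fbb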

Based on my limited knowledge of kvm/qemu, the host really doesn't need any pinning of CPU threads. If you have 16 threads total, you can pin 8 to one VM, 8 to another VM, 4 to another, and then still pin 16 to a final VM. The host parses the needs of the VMs and itself, and the threads jump around as "space" becomes available. Space in this context is an available thread to handle the computation that the VM or host requires. That is, of course, within the confines of which cores you have specified a VM may use. If you pin certain VMs to certain threads or cores you are limiting the reach of those VMs, but you could also be reducing the cross-communication over Ryzen's CCX Infinity Fabric.

If I understand correctly, what Wendell was talking about was in fact that cross-communication in Ryzen: since you have two core complexes of 4 cores and 8 threads each, and there is latency involved in cross-CCX communication, he is suggesting pinning cores 0-3 and 4-7 to specific VMs in order to reduce the latency of those VMs. By extension, if you leave the host to its own devices it will just use whatever it wants. If you are running a working daily-driver machine as the Linux host and then adding a VM for Windows, and you pin CCX1 to the VM, then the Linux host will be more likely to use CCX0 than CCX1, since there is load on CCX1 and not on CCX0.
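As a sketch, the pinning itself goes into the libvirt domain XML like this (the host CPU numbers are examples; check how your threads map onto the CCXs with lscpu -e before copying anything):

    <vcpu placement='static'>4</vcpu>
    <cputune>
      <!-- pin each guest vCPU to a host thread from the same CCX -->
      <vcpupin vcpu='0' cpuset='4'/>
      <vcpupin vcpu='1' cpuset='5'/>
      <vcpupin vcpu='2' cpuset='6'/>
      <vcpupin vcpu='3' cpuset='7'/>
    </cputune>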

My one caveat is that I have not tried to pin the host to certain cores; there may be a way to do so, but I have never had a need for it and so have never investigated. The most interesting, or possibly the most difficult, part of Ryzen is how kvm/qemu is going to address this very issue. If I just leave qemu to its own devices at this point in time, does it even understand that those 16 threads are spread over 2 CCXs and have additional latency to cross-communicate? Or is qemu just going to bounce your workload to whichever thread is available, without regard to which CCX it is using, and thus introduce additional latency into the currently near-seamless and well-executed system that allocates these resources to VMs and the host?

In conclusion, in my opinion, based on working with kvm/qemu and VMs for a while now, you do not need to allocate any specific cores/threads to the host if you are using it simply to run VMs; it will use whatever cores are available to run its own processes and will apply any specific pinning for the VMs that you tell it to. For the most part in my usage of VMs, I leave one core unallocated: I currently have an eight-core Xeon, so I allocate 7 cores to VMs, leaving one core for the host's exclusive use. I honestly doubt it makes a bit of difference, but I have plenty of cores, so why not.
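(For anyone who does want to investigate pinning the host itself, two places to look would be the isolcpus= kernel parameter and systemd's CPUAffinity setting; a sketch, untested on my end, with example CPU numbers:)

    # /etc/systemd/system.conf
    # restricts processes started by systemd (i.e. most of userspace) to these CPUs
    CPUAffinity=0 1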


I see. So unless you are switching distros for the hypervisor, you are out of luck. That's really unfortunate, because you then can't use the Hyper-V extensions and VM awareness, which decrease latency quite a bit. :frowning:

@wendell, you should add the affiliate tag to those links :wink:

Maybe you could also move your "daily" Linux to a VM, and pass a GPU to it the same way you pass one to Windows. I've never actually done this, but I've been thinking about trying it for some time.


That's actually a cool idea. And while you're at it, you can create multiple "daily" VMs, each for its own purpose, similar to what QubesOS is doing.

That's what I have been doing for one year now.

I have my hypervisor (HV) which runs Debian stretch and KVM. Nothing else. "It's just a stupid HV"

That HV has two GPUs (GTX 970 and GTX 750ti). It boots two (or more) VMs. One for Windows (gaming) and one for Linux (daily driver, also booted upon HV boot).

Both systems are isolated VMs. Pretty neat and no fiddling with Nvidia drivers claiming the Nvidia cards before VFIO :smiley:

Cheers!

For those who want to do virtualization to game in a Windows VM but don't want to deal with USB passthrough/hardware switches and monitor switching:
Take a look at this: https://parsec.tv/
It is game streaming software: it can stream the video output from your Windows VM to your Linux host system, and it also captures input and streams it back to the Windows VM.

Hoping someone could test it out and share the results :stuck_out_tongue:


I was searching for something like this, and you just pop up and post the link to exactly that :open_mouth:

How awesome is that!? I am going to try it!

Yeah, I do that all the time. I have a few VMs that I pass the GPU to. I was talking about switching the GPU between your host and the guest. Some people have done that with the free AMD drivers, so that they can play games in their GNU/Linux host with their card, and also switch it to their guest. GNU/Linux gets switched to a weaker GPU, or integrated graphics, when they turn the VM on.

I also compiled my kernel with the ACS patch, so that I could pass through multiple PCI devices despite my motherboard's IOMMU grouping. I pass through the 1080, a USB 3.0 card, and a PCI gigabit NIC.
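For anyone following along, the ACS override patch is driven by a kernel parameter; something like this on the kernel command line (exact spelling can depend on the patch variant you applied):

    # appended to GRUB_CMDLINE_LINUX in /etc/default/grub
    pcie_acs_override=downstream,multifunction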

I want to do this at some point too. I would have the host take my integrated graphics, or another card that works well with completely free drivers, and I would run all nonfree software in my Windows and GNU/Linux VMs. Right now the only nonfree software in my stretch install is the NVIDIA drivers and games. The main things stopping me from doing this are that my integrated graphics only support three monitors, and I use 4, and that not all of the ports support my 2560x1600 screens. Next time I upgrade my CPU, I will look for something with many PCIe lanes, and either integrated graphics that support four screens, or enough PCIe lanes to add a third card for the host.


Then Threadripper or a 1k US$ Intel i9 will be your CPU of choice :wink:

This may sound like a strange application for passthrough, but has anyone tried it with a Linux guest? Don't see why it wouldn't work, could be a nice way to try out new distros with really good performance.

Yeah :smile: Hopefully I won't be upgrading my CPU for a while. The 4790k does most of what I need it to right now. If AMD releases the source code for their platform security processor I would definitely go with Ryzen though, since that would allow me to have a completely free system.

Writing you from inside one (Linux guest) right now. No magic around this.

My HV is "stupid" and only runs VMs with GPUs passed through. It is itself a headless server :wink:

Rocking the same CPU here, the i7 4790k (@ 4.6-4.7 GHz) is really a beast. :thumbsup:

Only when Windows decides to go on an update rampage do I wish I had a few more cores.

I play a few hours on a weekend, tops. I would still be interested in PCIe passthrough because it is an awesome technology.

However, maybe for my current situation a dual boot (with Linux hibernating) is the best solution. But I will follow these videos closely, this is really interesting.

@wendell are you missing a step after updating vfio.conf in modprobe.d? Other guides point to also updating/creating a vfio.conf in /etc/dracut.conf.d with add_drivers+="vfio vfio_iommu_type1 vfio_pci" and THEN running dracut -f --kver
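For reference, the step those guides describe looks roughly like this (dracut wants the spaces inside the quotes, and --kver takes the version of the kernel you boot):

    # /etc/dracut.conf.d/vfio.conf
    add_drivers+=" vfio vfio_iommu_type1 vfio_pci "

    # rebuild the initramfs for the running kernel
    sudo dracut -f --kver "$(uname -r)"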

I'm having issues getting vfio to get a hold of the GPU at boot. I would recommend adding the command "lspci -k" to the guide to help readers verify vfio has control of their GPU.
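What you want to see is "Kernel driver in use: vfio-pci" against the card, for example (the address and IDs here are just an illustration):

    $ lspci -nnk -d 10de:13c2
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2]
            Kernel driver in use: vfio-pci
            Kernel modules: nouveau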

A guide I came across that looks newer than when you filmed this video: http://www.laketide.com/setting-up-gpu-passthrough-with-kvm-on-fedora/

Hmm, I'll double check. I remember having to do that in the past, but the rd.driver.pre kernel param should grab it as early as possible.
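For context, that parameter sits on the kernel command line next to the IOMMU options, roughly like this on Fedora (example values; use intel_iommu=on on Intel):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="... amd_iommu=on iommu=pt rd.driver.pre=vfio-pci"

    # regenerate the grub config afterwards (EFI systems use the path under /boot/efi)
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg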
