Ryzen GPU Passthrough Setup Guide: Fedora 26 + Windows Gaming on Linux | Level One Techs

It might just be me, so don't worry too much about it. My setup is pretty unique. I was able to get it working on Debian without too much trouble, but now I'm trying on Fedora. i5-4690K, ITX mobo, dGPU to Win 10, iGPU to Linux, the mobo's single USB controller given to Win 10. Win 10 hosts the Synergy server and Linux is the Synergy client (that way there's no input latency while gaming).
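
For anyone copying that Synergy arrangement, a minimal synergy.conf sketch for the server side (the screen names here are made up; use your actual hostnames):

section: screens
    win10-vm:
    fedora-host:
end
section: links
    win10-vm:
        right = fedora-host
    fedora-host:
        left = win10-vm
end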

2 Likes

A year or so ago I did this on an Intel machine. I had a Windows VM for gaming and an Ubuntu VM for CUDA/compute. It worked fine on my Fedora host machine.

Ya gonna make me spend some money on a Ryzen system.

1 Like

The easiest way around having two identical graphics cards is to cross-flash one of the cards with the BIOS from a different model that uses the same GPU.

For example, if you have two 1080 Gaming X cards, flash one with the BIOS from the Gaming Z model. The BIOS update changes the device ID, and as far as the motherboard is concerned they are different card types.
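
A rough sketch of what that flash might look like with NVIDIA's nvflash (hedged: the adapter index and filenames are examples; back up first and verify the donor BIOS really matches your card's board design):

nvflash --list                        # find the adapter index of the card to flash
nvflash --index=1 --save gaming_x.rom # back up the current BIOS (index is an example)
nvflash --index=1 -6 gaming_z.rom     # flash the donor BIOS; -6 overrides the subsystem ID mismatch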

2 Likes

I think it's pretty safe to assume that the Aorus X370 Gaming K7 would also be supported,
since that board is pretty much the same as the Gaming 5, just with some additional features.

1 Like

Currently writing to you from a system (i7 4790K) with two identical GPUs (Gigabyte GTX 970 G1). To my surprise, despite the device IDs being identical, things work perfectly fine for me.

HV: Debian Stretch, Linux 4.12.0-rc5-acs-patch (ACS patched kernel)

lspci -nnn:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
02:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)

Not sure if the issues with two identical GPUs are related to the Ryzen platform/chipset, but I cannot spot any issues here. Both GPUs work at the same time in different VMs, and they also work simultaneously if assigned to the same VM.

Cheers :wink:

I think the problem that people usually have with identical cards is attaching different drivers to each card at boot. In your case, it sounds like vfio-pci is being assigned to both cards at boot. The problem would come if you wanted one attached to the host and one to the guest. It is still possible, but a bit trickier (see the sketch below).
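
Since identical cards share the same vendor:device ID, the usual vfio-pci ids= option can't tell them apart, so one known approach is to bind the guest card by PCI address through driver_override early in boot. A minimal sketch, assuming example addresses (check yours with lspci) and that it runs before the GPU driver loads, e.g. from the initramfs:

#!/bin/sh
# Bind only the guest GPU and its HDMI audio function to vfio-pci
for dev in 0000:02:00.0 0000:02:00.1; do
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
done
modprobe vfio-pci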

Heh, yeah, I work around this by using my HV as a, well... HV only :wink:

So all GPUs are bound to VFIO, which makes things way easier.

The discussion is about Ryzen platform-specific issues, not Linux/IOMMU in general, which works fine on Intel platforms.

I have a working setup with GPUs that have identical IDs.
I simply bind both to vfio_pci and then unbind the one for the host before its driver module loads:
$ cat /etc/modprobe.d/amdgpu.conf
# Load AMDGPU for R9 290
install amdgpu echo 0000:29:00.0 > /sys/bus/pci/devices/0000:29:00.0/driver/unbind; /sbin/modprobe -i amdgpu

This works for amdgpu and radeon. I don't have the hardware to test NVIDIA or nouveau.
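
For completeness, a sketch of the other half of that setup (IDs and addresses below are examples; 1002:67b1 is the ID an R9 290 reports in lspci -nn, and the NVIDIA line is an untested guess, as the poster notes):

# /etc/modprobe.d/vfio.conf -- grabs every card matching this ID, i.e. both of them;
# vfio-pci has to load before amdgpu, e.g. from the initramfs
options vfio-pci ids=1002:67b1
# hypothetical NVIDIA analogue of the unbind trick (untested, example address)
install nvidia echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind; /sbin/modprobe -i nvidia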

One more note.
If the guest experiences microfreezes, the following resolved it for me:
<cpu ...>
...
<feature policy='disable' name='smep'/>
</cpu>

I had a horrible experience, especially in VR, until I disabled SMEP.
Wendell also has it disabled in the video.
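
For orientation, that feature line sits inside the domain's <cpu> element in the libvirt XML; a minimal sketch (the mode and topology values here are examples, not the poster's actual config):

<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='2'/>
  <feature policy='disable' name='smep'/>
</cpu>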

2 Likes

Can you explain the microfreezes a bit more? When did they occur? For how long?

Sometimes the whole guest would be unresponsive for a few milliseconds.
It was most noticeable during mouse movement: the mouse would stop, then resume.
In games the whole screen would freeze.

There was also choppiness in VR when I looked or moved around.
This was consistent, not occasional.
How noticeable it was depended on the game and GPU load.
Also, when I watched the graph in SteamVR, I could see frames dropped at the same frequency every few frames in some games.
This was gone as soon as I disabled SMEP.

This is what I did with the FE 1080 and it seemed to work: echoing into /driver/unbind.

Has anyone tried to run OS X on a Linux box and pass through a GPU, so you could run Final Cut Pro? I know there are alternatives for it on Linux, but it's just my favorite video editing software and it would be really nice to get it running on a Linux system.

Trying to do this with an ASRock X79 setup, the Extreme-4M (which actually supports ECC RAM and a Xeon xD). Need to check the IOMMU groups though (see the snippet after this post). It might not like it too much.

I know, I know, I'm on old X79 and all. Just figured, if it ain't broke, don't replace it xD

Would be using Arch Linux - because, well, it's just nice to have all the latest packages.

Anyone have any experience with X79 builds for this? I was thinking of using an old 8800 GT single-slot card for the host OS and a GTX 970 for the guest.
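
A quick way to check those IOMMU groups once the board is up (a standard sysfs walk, nothing X79-specific):

#!/bin/sh
# List every IOMMU group and the devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done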

I have tried this. To get it working you need to compile the virtual UEFI from source. I had some problems compiling it, and I have not gotten back to it yet, but I will soon. This link may be helpful:
https://www.contrib.andrew.cmu.edu/~somlo/OSXKVM/

I have a hackintosh partition on my system that I used to try to get the VM working, and it supports all of the devices that I am going to pass through. Once I get the UEFI working, it should work similarly to my Windows guests. I don't have much use for an OS X VM; I just want to set it up to see if I can.
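
For reference, the usual edk2/OVMF build steps look roughly like this (a sketch only; the guide linked above adds its own patches for OS X support, so follow it for the specifics):

git clone https://github.com/tianocore/edk2.git
cd edk2
make -C BaseTools
. edksetup.sh
build -a X64 -t GCC5 -p OvmfPkg/OvmfPkgX64.dsc
# the firmware image ends up under Build/OvmfX64/*/FV/OVMF.fd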

4 Likes

To be honest, I'm a bit lost as to why GPU passthrough is so important. I run Linux... just Linux... no VMs, no Wine, no anything... just Linux. 74% of my Steam library runs natively; that's 102/138 games. So why... why do we need GPU passthrough? I don't understand.

Because there are some games that people enjoy that will not run well in Wine or on Linux. Some applications also will not or do not run well in Wine or on Linux. So for the people that want the best of both worlds, this is the answer.

I see that as a detriment to the Linux community. We have a chicken-and-egg problem: devs don't bring software to Linux because there aren't enough users, and users don't use Linux because there isn't enough software. This only contributes to that, because software run in the virtualized environment won't know it's virtualized, and even if it does, it doesn't hurt the pocketbooks of the devs who made it, so they won't care.

Hey man, I'm super late to reply to this, but I'm from Parsec.

You might need an HDMI dongle if you want to run GTX cards headless (no monitor), but there may be a chance that AMD consumer cards allow "virtual monitors". For example, I have built a Server 2016 hypervisor with a Quadro M2000 passed through to a Server VM, and I use that with Parsec headless: the Quadro driver lets you assign EDIDs to physical ports on the GPU (our app works by basically taking a copy of the front buffer, so it needs a "display"). Windows doesn't do any CPUID masking either, so to get working passthrough with an NVIDIA GPU there you need a recent Quadro card; otherwise you get error 43. The cool thing, though, is that because you can make up any EDID, you can set 120Hz or higher and stream at 120Hz from your remote PC.
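
On the Linux/KVM side, the widely used workaround for error 43 on consumer GeForce cards is to hide the hypervisor from the NVIDIA driver in the libvirt domain XML (a sketch; the vendor_id value is an arbitrary example string):

<features>
  <hyperv>
    <vendor_id state='on' value='whatever123'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>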

I'd be so keen to hear from anyone who's doing Linux-based GPU passthrough with Parsec. I need to give it a go myself... just have to find the time.

1 Like