Ryzen GPU Passthrough Setup Guide: Fedora 26 + Windows Gaming on Linux | Level One Techs

Thank you, I followed the instructions in the link you posted and it is working now. I just need to get NTP working.

1 Like

Sadly, I bought my board before finding this guide. I have an ASUS PRIME X399-A motherboard. Otherwise I have followed the guide. I have two Samsung 960 PRO NVMe 512GB drives that show up in separate IOMMU groups. They have the same device ID.

If there is a way to pass through only one of these, I'd like to use the other for Fedora. If anyone has implemented the shell script that Wendell mentions in the video, perhaps I could adapt it for use on my drives as well.

Is this a problem for two identical drives, the same way it is for two identical video cards? I have another drive I can use if this is a no go.

Yeah, you will need an ACS patch to separate two of the same drive or GPU. That isn't optimal at all.

I'd exchange one of the 960 Pros for an EVO.

You would also be better off using an RX 580 for the host and a GeForce for the VM at the moment, because of issues with Nvidia and Wayland (the default display server in GNOME now).

this isn't really too big of a deal – do you plan to use the other one as the boot drive? if not, then you can get vfio to release one of them and re-assign the driver later in the boot-up process.

I think vfio supports binding by bus ID these days, but worst case you can use a script in your initrd to bind only one of the NVMe drives. Similar process for duplicate video cards.
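A sketch of what such an initrd script could do: bind a single device to vfio-pci by bus address via driver_override, so its identical twin keeps its normal driver. The bus address 0000:01:00.0 and the helper name bind_vfio are made up, and the sysfs root is a parameter so the demo below can run against a scratch directory instead of the real /sys.

```shell
# bind_vfio: hypothetical helper that claims one PCI device for vfio-pci
# by its bus address rather than its vendor:device ID, so only one of
# two identical devices gets bound.
bind_vfio() {
    sysfs="$1"; dev="$2"
    # tell the kernel which driver may claim this specific device
    echo vfio-pci > "$sysfs/bus/pci/devices/$dev/driver_override"
    # ask the PCI core to (re)probe it
    echo "$dev" > "$sysfs/bus/pci/drivers_probe"
}

# Demo against a scratch tree; on a real system you would pass /sys
# (as root, early enough that the nvme driver hasn't bound yet).
root=$(mktemp -d)
mkdir -p "$root/bus/pci/devices/0000:01:00.0"
: > "$root/bus/pci/drivers_probe"
bind_vfio "$root" 0000:01:00.0
cat "$root/bus/pci/devices/0000:01:00.0/driver_override"
```

The second drive, with a different bus address, is simply never touched, so the kernel's normal nvme driver picks it up.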

Anyone messed with passthrough on MSI B350 Tomahawk board?

Wendell maybe? :stuck_out_tongue:

1 Like

Damn I thought I had checked that. Appreciate it

@wendell Have you done any more testing on the MSI B350 Tomahawk board lately?

not since the AGESA update… the Tomahawk was great for non-OC/light 8 cores or the 6 core. The PCIe slot layout is not optimal for passthrough. Maybe with an APU? But the other slot is through the PCH, so no passthrough there. And no way to specify the primary GPU to be the one through the PCH. The ASRock ab4 has its other x16 shared with the M.2, so that works pretty well if you don't use the M.2.

1 Like

If you're using a SATA M.2, that technically wouldn't disable the lanes for the PCIe slot, right? But does that actually work in the UEFI?

yep, it has on the boards I've tested; haven't tested this specific board though

1 Like

Considering I have one set of KVM peripherals, how would a KVM switch work? I've never used one, and from looking at the device I wouldn't have two monitor inputs, so what would I plug into the host/guest slots? Using this video as a reference, I would only have one machine to plug in.
I haven't bought the PC for passthrough yet; I'm just doing research, so I can't just plug it in and try it out.

Has anyone had difficulties with deleting the Spice display? I tried that on both a Linux and a Windows guest, and neither will load with the passthrough GPU. Now I am in a bit of uncharted territory with a Vega 64 GPU. I was able to get the Windows 10 install to start with Spice and then just "disable" that display and set my main monitor as the "Main Display" using the passthrough. But on the Fedora 27 guest I am even more lost. I did install the proprietary driver for the GPU, but still nothing. At a minimum I am hoping to get either guest working without the Spice display. Any advice in this area would be greatly appreciated. Thanks, -Bill

Has anyone passed through a USB controller on the Gigabyte Aorus Gaming 5? I basically need to use a gamepad and a USB audio interface (I tried to assign them to the VM like the keyboard and mouse, but I get an error in the Windows 10 guest).

The latest AGESA has better separation for peripherals hanging off the chipset. You could try that to pass through only a USB controller originating from the chipset itself. PCIe peripherals that go through the chipset still all land in the same block, but the USB built into the chipset supposedly has better separation with the new AGESA.
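To see what the new AGESA actually changed on your board, the usual first step is listing every IOMMU group and its devices. A sketch (nothing here is board-specific; it just walks sysfs):

```shell
# Print each IOMMU group and the devices in it. A USB controller that
# sits alone in its group can be passed through without dragging other
# chipset peripherals along with it.
list_iommu_groups() {
    for g in /sys/kernel/iommu_groups/*; do
        [ -d "$g" ] || continue          # no IOMMU groups -> print nothing
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            [ -e "$d" ] || continue
            # show the device; fall back to the bare address if lspci is absent
            lspci -nns "${d##*/}" 2>/dev/null || echo "  ${d##*/}"
        done
    done
}
list_iommu_groups
```

If the gamepad's and audio interface's controller shows up in a group of its own, it should be assignable to the VM as a whole controller rather than as individual USB devices.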

I just got passthrough working on Debian, but I had to work through a couple of issues. After I got it working, the VM suddenly would not boot anymore and would stall on the TianoCore boot logo. I switched to Fedora 27, and the issue is the same.

This does not happen if I choose i440fx. Only with Q35 does the system refuse to boot from the installation media.
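For reference, the machine type lives in the domain XML's <os> element (visible via virsh edit); the i440fx variant looks roughly like the fragment below. The exact version suffix depends on the QEMU build, so treat it as an assumption.

```xml
<os>
  <type arch='x86_64' machine='pc-i440fx-2.11'>hvm</type>
</os>
```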

Makes me wonder what setting I had before, when I was able to install Windows, boot, and set up the Nvidia drivers.

UPDATE

I just ended up using the i440fx platform. Existing passthrough guides confused me, so I'm just using what works for me. A little about the specifications: I'm currently on a B350 PC Mate motherboard. I'm using an old Radeon 4850 in the second x4 slot; I set up the X server to make sure it uses that graphics adapter (the 4850) before rebooting after adding vfio-pci to the kernel modules. I have a 1050 Ti in the primary Gen 3 x16 slot.

In order for passthrough to work, I had to dump the 1050 Ti ROM file in Windows and edit out the NVFlash data at the beginning of the ROM file. After that it was just a matter of editing the VM XML file to use the ROM file. I believe this procedure is only necessary on Nvidia 10-series GPUs.
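For anyone curious, the edit is just cutting off everything before the ROM's 55 AA option-ROM signature. A rough sketch of one way to do the trim on Linux instead of a hex editor: the "dumped" ROM built here only stands in for a real dump, all file names are made up, and it assumes the signature doesn't occur inside the junk header.

```shell
# Build a stand-in "dumped" ROM: junk header (the NVFlash data in my
# case) followed by the 55 AA option-ROM signature and a body.
printf 'JUNK-HEADER' >  dumped.rom
printf '\125\252'    >> dumped.rom   # 0x55 0xAA signature, in octal
printf 'ROM-BODY'    >> dumped.rom

# Find the byte offset of the first 55 AA and drop everything before it.
offset=$(LC_ALL=C grep -abo "$(printf '\125\252')" dumped.rom | head -n1 | cut -d: -f1)
dd if=dumped.rom of=trimmed.rom bs=1 skip="$offset" 2>/dev/null

# The trimmed file now starts with the signature.
od -An -tx1 -N2 trimmed.rom
```

trimmed.rom is then what you would point the VM XML's rom file attribute at.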

After that, everything is working properly. Way better than dual booting.

Awesome tutorial, really helpful and I love how you actually explain pretty much everything too.

I have three questions though:

  1. I get stuck at editing the qemu config file; there isn't one created for me. I've got as far as further configuring the VM (the window just before installing the OS), and I'm kind of stuck there now. I can't create the virtual disk on my Linux host as it doesn't have enough space, and I didn't want to create it on the drive I want to actually install the VM onto (sdc); I'll explain why in the next point.

  2. I know it wasn't covered in this thread, but I want to install the Windows VM onto its own drive without actually running Windows outside the VM. I read that LVM is the best option as it's the easiest to customise later on. But when I try to use that option, I get an error saying either that it's already made and needs a force overwrite, that there is no LVM partition, or that the pool is already created (I tried partitioning it beforehand, deleting the partition, and using the normal full-disk option instead of LVM; none seem to work).

  3. How did you manage to get the dual ALC1220s working on the Gaming 5 mobo? I've been searching all day for an answer and can't find anything that works. I'm using Fedora 27, kernel 4.14.13. alsamixer shows that sound is going to the driver and it's not muted anywhere (checked in ALSA and pavucontrol). If it helps, this is what is showing for the device: "Digital Output (S/PDIF) - HD-Audio Generic", and only the "Digital Stereo (IEC958) Output" profile, no 5.1 etc., which is what my speakers are. alsamixer shows it as ALC1220 too.

I'd be happy to share any outputs needed to troubleshoot.

That's good to hear, though the field reports from ASRock folks trying to update to the latest BIOSes are far from promising – more like frightening, considering the number of bricks being reported.

Trying to set up GPU passthrough for the first time on F27 and I can't get vfio-pci to load.

Here is my hardware:

ASRock AB350 Pro4
Ryzen 5 1600
Host: GTX 560, bottom slot
Guest: GTX 1070, top slot

Here is the group I want to pass through:

IOMMU Group 13 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1070] [10de:1b81] (rev a1)
IOMMU Group 13 0b:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)

Here is my /etc/modprobe.d/vfio.conf:

options vfio-pci ids=10de:1b81,10de:10f0

Here is what grub has generated in /boot/efi/EFI/fedora/grub.cfg:

...
linuxefi /vmlinuz-4.14.13-300.fc27.x86_64 root=UUID=7e7e0710-a359-41e3-bac0-ef751ff658a8 ro rd.luks.uuid=luks-a2af3e95-64f5-4716-b7da-94d7cb837daf rd.luks.uuid=luks-a8e10744-5d7c-4e8e-84e0-e54d911899cf rhgb quiet iommu=1 amd_iommu=on rd.driver.pre=vfio-pci
...

After rebooting, both my GPUs are working in Linux as usual. lspci says:

...
01:00.0 VGA compatible controller: NVIDIA Corporation GF114 [GeForce GTX 560] (rev a1) (prog-if 00 [VGA controller])
	...
	Kernel driver in use: nouveau
	Kernel modules: nouveau, nvidia_drm, nvidia
...
0b:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1) (prog-if 00 [VGA controller])
	...
	Kernel driver in use: nouveau
	Kernel modules: nouveau, nvidia_drm, nvidia
...

modinfo says vfio-pci exists:

$ modinfo vfio-pci
filename:       /lib/modules/4.14.13-300.fc27.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz
...

dmesg doesn't say much:

$ dmesg | grep -i vfio
[    0.000000] Command line: BOOT_IMAGE=/vmlinuz-4.14.13-300.fc27.x86_64 root=UUID=7e7e0710-a359-41e3-bac0-ef751ff658a8 ro rd.luks.uuid=luks-a2af3e95-64f5-4716-b7da-94d7cb837daf rd.luks.uuid=luks-a8e10744-5d7c-4e8e-84e0-e54d911899cf rhgb quiet iommu=1 amd_iommu=on rd.driver.pre=vfio-pci
[    0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-4.14.13-300.fc27.x86_64 root=UUID=7e7e0710-a359-41e3-bac0-ef751ff658a8 ro rd.luks.uuid=luks-a2af3e95-64f5-4716-b7da-94d7cb837daf rd.luks.uuid=luks-a8e10744-5d7c-4e8e-84e0-e54d911899cf rhgb quiet iommu=1 amd_iommu=on rd.driver.pre=vfio-pci

lsmod says nothing:

$ lsmod | grep -i vfio

I can load vfio-pci manually with modprobe:

$ modprobe vfio-pci

$ dmesg | grep -i vfio
...
[54916.492435] VFIO - User Level meta-driver version: 0.3
[54916.501072] vfio_pci: add [10de:1b81[ffff:ffff]] class 0x000000/00000000
[54916.501076] vfio_pci: add [10de:10f0[ffff:ffff]] class 0x000000/00000000

$ lsmod | grep -i vfio
vfio_pci               45056  0
vfio_virqfd            16384  1 vfio_pci
vfio_iommu_type1       24576  0
vfio                   28672  2 vfio_iommu_type1,vfio_pci
irqbypass              16384  2 kvm,vfio_pci

…but both GPUs are still usable in Linux and there is no change in the lspci output (I guess vfio-pci must be loaded early in boot for the GPUs to bind to it :slight_smile:).
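If that guess is right, then as I understand it the Fedora way would be forcing the module into the initramfs with a dracut config fragment and rebuilding with `dracut -f`, so vfio-pci can claim the 1070 before nouveau loads. Untested on my part, and the file name is arbitrary:

```shell
# /etc/dracut.conf.d/vfio.conf -- pull the vfio modules into the
# initramfs so they are available before the graphics drivers load
force_drivers+=" vfio_pci vfio vfio_iommu_type1 vfio_virqfd "
```

Then rebuild the initramfs with `dracut -f` and reboot.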

What am I missing?

I could be wrong, but doesn't the host GPU have to be in the top slot? The pass-through guest in the second?