ACS Patch did not separate my GPU into a separate IOMMU Group

I used the ACS patch from https://queuecumber.gitlab.io/linux-acs-override/ for kernel 4.19 and added pcie_acs_override=downstream to the kernel command line in /etc/default/grub, yet my GPU is still grouped with several other devices.

  • IOMMU Group 12 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43d5] (rev 01)
  • IOMMU Group 12 01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43c8] (rev 01)
  • IOMMU Group 12 01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43c6] (rev 01)
  • IOMMU Group 12 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43c7] (rev 01)
  • IOMMU Group 12 02:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43c7] (rev 01)
  • IOMMU Group 12 02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43c7] (rev 01)
  • IOMMU Group 12 04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
  • IOMMU Group 12 05:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290/390] [1002:67b1] (rev 80)
  • IOMMU Group 12 05:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [1002:aac8]

The last two belong to my GPU and are the ones I need to separate.
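For reference, a listing in that form can be produced with the usual loop over /sys/kernel/iommu_groups, roughly:

for g in /sys/kernel/iommu_groups/*; do
  for d in "$g"/devices/*; do
    echo "IOMMU Group ${g##*/} $(lspci -nns "${d##*/}")"
  done
done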

Did you update grub and reboot?

Also, can you post the output of-
uname -r
sudo cat /etc/default/grub
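In case you haven't: on an Ubuntu/Debian style install the usual sequence after editing /etc/default/grub is roughly-

sudo update-grub     # regenerate /boot/grub/grub.cfg from /etc/default/grub
sudo reboot
cat /proc/cmdline    # afterwards, check the new parameters are on the running kernel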


Yes I did.

4.19.0-041900-generic

GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=1 amd_iommu=on iommu=pt pcie_acs_override=downstream"
GRUB_CMDLINE_LINUX=""

If you are wondering about my grub config, I have iommu=pt there because without it I get a kernel panic. Could this be causing issues?

WHOOPS, I just realized my obvious mistake: I didn't boot from the patched kernel.

I booted from the patched kernel and it still isn't separating my GPU from everything else.

uname -r
4.19.0-acso

sudo cat /etc/default/grub

GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=1 amd_iommu=on iommu=pt pcie_acs_override=downstream"
GRUB_CMDLINE_LINUX=""

My GPU is still grouped with the same devices as before I patched the kernel.
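In case it matters, this is roughly how I understand you can check the override is actually active on the running kernel (the exact dmesg wording depends on the patch version)-

cat /proc/cmdline                    # should include pcie_acs_override=downstream
sudo dmesg | grep -i "acs override"  # the patched kernel normally logs a warning when the override applies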

You could try pcie_acs_override=multifunction

Otherwise, do you have another available slot to try the GPU in? You could switch it with the host GPU, and if possible switch it in the bios/uefi so you actually still boot off of the correct GPU.

If moving slots does not work, then you are stuck till you get a different motherboard.

Unfortunately my BIOS has no option to boot off the GPU in my second PCIe slot; it just defaults to the one closest to the CPU. I actually tried a setup like that a couple of weeks ago and was basically told that no BIOS can pick which PCIe slot is the primary display, only switch between the PCIe slots and onboard graphics (which my mobo doesn't have). I don't know how true this is, but it's what I was told. I can completely swap my GPUs between the two PCIe slots I have, but I'm guessing I'm just going to run into the same problem trying to pass through the GPU in the second slot again?

I do have a PCIe x4 slot in the middle of my two x16 slots, but I'd have to get an adapter to fit my GPU in there.

Yep.

It is sometimes possible to pass through the boot GPU, but it adds another layer of config and complexity.

Edit- For setting a manual vBIOS ROM: unless you can find one that matches your card exactly, dump it yourself. It is not very hard to do, at least compared to the rest of setting up passthrough.
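If you do end up dumping it, the usual sysfs route is roughly this (05:00.0 is just the address from your earlier listing and will differ after moving slots; the card should be idle, not driving your display, while you read the ROM)-

cd /sys/bus/pci/devices/0000:05:00.0
echo 1 | sudo tee rom             # make the ROM readable
sudo cat rom > /tmp/R9-390.rom    # dump it; the filename is arbitrary
echo 0 | sudo tee rom             # turn it back off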

Wow, thank you. I wish I had known this before. I got a whole new case to fit two full-size GPUs, because before I had to use a cheap single-slot GT 710 in my second slot (the only thing that would fit) while I passed through my R9 390 in the main PCIe slot. Using this mATX mobo for GPU passthrough has been a pain in the ass so far.


So replacing pcie_acs_override=downstream with pcie_acs_override=multifunction helps, but my GPU is still grouped with one other device.

  • IOMMU Group 17 02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43c7] (rev 01)
  • IOMMU Group 17 05:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii PRO [Radeon R9 290/390] [1002:67b1] (rev 80)
  • IOMMU Group 17 05:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii HDMI Audio [Radeon R9 290/290X / 390/390X] [1002:aac8]

Is there something else you can suggest, or could I try passing through the GPU with its grouping like this?

Yeah, it should work. I think the PCI bridge means that the lower slot with your Hawaii GPU is going through the chipset and not directly to the CPU.

Bind (to vfio-pci) and pass through the GPU and GPU audio, but not the PCI bridge.
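Using the IDs from your listing, a minimal sketch of the modprobe side would be something like this (the softdep lines assume amdgpu/radeon would otherwise grab the card; regenerate the initramfs with sudo update-initramfs -u and reboot afterwards)-

# /etc/modprobe.d/vfio.conf
options vfio-pci ids=1002:67b1,1002:aac8
softdep amdgpu pre: vfio-pci
softdep radeon pre: vfio-pci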
Sources-
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Plugging_your_guest_GPU_in_an_unisolated_CPU-based_PCIe_slot


I must ask, what is your MB and CPU?
From what little is posted I would guess an AMD Zen based setup with your GPU in a chipset attached slot.
In that case I would set pcie_acs_override=id:1022:43c6.
This should be enough to separate chipset connected devices and PCIe slots into different IOMMU groups.
Using downstream or multifunction is the last resort.
Still, I would put the GPU into the CPU connected PCIe slot.
And if you only have one, try disabling CSM in your BIOS; I have seen several boards that change their behaviour to boot from the chipset connected GPU.
You will need an EFI enabled GPU for this.
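In the grub line you posted that would just mean swapping the override value, i.e. something like-

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash iommu=1 amd_iommu=on iommu=pt pcie_acs_override=id:1022:43c6"

followed by sudo update-grub and a reboot.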

My mobo is a TUF B450M-PLUS and my CPU is a Ryzen 1800X. The two GPUs in my system are a GTX 760 for the host OS and an R9 390 for the guest. I will try the grub setting you posted. I managed to get the GPU passed through using pcie_acs_override=multifunction, but QEMU via virt-manager refuses to boot into my Windows 10 ISO. I'm pretty sure it's because I have CSM enabled in the BIOS, but unfortunately if I turn that setting off my BIOS defaults to the R9 390 and Ubuntu tries to boot off of it. I'm pretty sure my R9 390 is EFI capable and that's the reason my BIOS defaults to it with CSM off, or at least that's my guess. I suppose I could just use the R9 390 as the host and the GTX 760 as the guest, but that really wouldn't be preferable because I want the stronger card on the Windows VM.

As I suspected. Your R9 390 is connected to the chipset, which is only PCIe 2.0 x4.
You could swap the cards and disable CSM. For whatever reason, on Zen, if you disable CSM the boot GPU is the chipset connected one. My guess is that it has a lower PCIe address. If your GTX 760 has an EFI BIOS, the system will boot from that card.
Then you would not need the ACS patch at all.

I would also try pcie_acs_override=id:1022:43c6 instead of multifunction, and pass both the GPU and its HDMI audio connected via a pcie-root-port or ioh3420.
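On the QEMU side that is the classic ioh3420 arrangement, roughly like this (the 05:00.x address is from your earlier listing and will change if the card moves to the other slot)-

-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=05:00.0,bus=root.1,addr=00.0,multifunction=on \
-device vfio-pci,host=05:00.1,bus=root.1,addr=00.1

With virt-manager/libvirt the equivalent should be putting both hostdev functions on the same virtual slot (function 0x0 and 0x1) in the XML.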


I switched my GPUs around, disabled CSM, and now my IOMMU groupings are perfect. My GPU is separated from every other device in my system even without the ACS patch. I am still having issues with booting into my Windows 10 ISO though. QEMU resets the VM every time it tries to boot into the Windows installation. I thought this was because CSM was enabled, but it looks like that isn't the case. I think I am going to make another thread about this problem.