Am I screwed if both my GPUs are in the same IOMMU group? (Aorus X570 Master)

A couple of months ago I bought a new system based on an Aorus X570 Master and a Ryzen 3950X, with the intention of gaming on a Windows VM using PCI passthrough.

Today I finally acquired a second GPU and installed it immediately, and right away my progress came to a screeching halt. Using this script from the Arch PCI passthrough guide, I got this output (to my horror):

IOMMU Group 2:
	00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
	00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
	00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
	0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] [10de:1c02] (rev a1)
	0c:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
	0d:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch [1002:1478] (rev c1)
	0e:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479]
	0f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] [1002:731f] (rev c1)

NOOOOO

Both my video cards are in the same IOMMU group! How can this be! I remember this motherboard got Wendell’s green light, partly because of good IOMMU support.

I am using the topmost and second-topmost PCIe slots. There is simply no room in my case to use the bottom one, so if that turns out to be the only solution I am thoroughly f***ed.

I am using the F11 BIOS, which is currently the most recent version.

Should I just throw away one video card, let my dreams die, and go back to dual booting to play Windows-only games? Or is there any hope?
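
For reference, the script (I saved it as show-iommu-groups.sh) is more or less the standard loop over /sys/kernel/iommu_groups from the Arch wiki, roughly this:

#!/bin/bash
# List every IOMMU group and the PCI devices it contains (sketch of the Arch wiki script).
shopt -s nullglob
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done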


Full output of the IOMMU script if it interests anybody...
➜ bash show-iommu-groups.sh 
IOMMU Group 0:
	00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
	00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
	00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
	01:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)
	02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream [1022:57ad]
	03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
	03:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
	03:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
	03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
	03:05.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
	03:08.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a4]
	03:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a4]
	03:0a.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a4]
	04:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)
	05:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)
	06:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX200 [8086:2723] (rev 1a)
	07:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
	08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller [10ec:8125] (rev 01)
	09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
	09:00.1 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
	09:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
	0a:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
	0b:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 1:
	00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 10:
	00:08.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 11:
	00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
	00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 12:
	00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0 [1022:1440]
	00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1 [1022:1441]
	00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2 [1022:1442]
	00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3 [1022:1443]
	00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4 [1022:1444]
	00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5 [1022:1445]
	00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6 [1022:1446]
	00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7 [1022:1447]
IOMMU Group 13:
	10:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
IOMMU Group 14:
	11:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
IOMMU Group 15:
	11:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
IOMMU Group 16:
	11:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU Group 17:
	11:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
IOMMU Group 18:
	12:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 19:
	13:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 2:
	00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
	00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
	00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
	0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] [10de:1c02] (rev a1)
	0c:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
	0d:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch [1002:1478] (rev c1)
	0e:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479]
	0f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] [1002:731f] (rev c1)
	0f:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio [1002:ab38]
IOMMU Group 3:
	00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 4:
	00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 5:
	00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 6:
	00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 7:
	00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 8:
	00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 9:
	00:08.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]

I'm beginning to think that manufacturers watch Wendell's recommendations and actively sabotage their products with shitty firmware updates :wink:


No, you're not screwed. You need the ACS override patch, which lets you tell the kernel to ignore the ACS grouping rules. There are consequences to this (devices in the same group lose their DMA isolation from each other), but given your GPUs, you won't experience any of them.
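
For reference, a rough sketch of what that looks like in practice: run a kernel that actually carries the ACS override patch (linux-zen and the AUR linux-vfio packages ship it, as far as I know), then turn the override on with a kernel parameter. With GRUB that would be something like:

# /etc/default/grub -- "quiet" just stands in for whatever is already on this line
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_acs_override=downstream,multifunction"

# regenerate the bootloader config and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg

After the reboot, re-run the IOMMU group script and the two cards should land in separate groups.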


Please enable these options in the BIOS and report back with your full IOMMU group list:

SVM -> Enable
iommu -> Enable
ACS Enable -> Enable
Enable AER cap -> Enable
Alternative routing -> Enable

Not sure whether the Alternative Routing one is necessary. Please try enabling/disabling it and see if there is any difference.
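
Once you've rebooted with these changed, a quick sanity check from the Linux side that the IOMMU actually came up is something along the lines of:

dmesg | grep -i -e DMAR -e IOMMU

On this platform the interesting lines are the AMD-Vi ones.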

Sources:

https://www.valo.at/2020/01/23/amd-ryzen-vfio-gpu-passthrough-gaming-vm-on-new-hardware/

https://www.overclock.net/forum/11-amd-motherboards/1728360-gigabyte-x570-aorus-owners-thread-652.html#post28351228

https://forum.level1techs.com/t/gigabyte-x570-aorus-elite-iommu-groups/144937/36

https://forums.unraid.net/topic/85374-gigabyte-x570-aorus-elite-pro-wifi-ultra-tips-tricks/?do=findComment&comment=795755


Holy shit, thanks guys! That's a big relief.

Will do ASAP!


Yeah, sometimes playing around with BIOS settings related to PCIe configuration/layout can change things.

I’m not familiar with that board in particular, so I figured I’d just allay your concerns.

At the end of the day, lots of board manufacturers are getting on board with proper IOMMU configuration.

In the future, if you're not sure, just taking a stroll through all the UEFI config settings can make something jump out as "oh, this might be relevant", and you can flip it and see what happens.

There aren't really any UEFI switches that you can't go back from, so don't be afraid to monkey around with it, especially since most boards have user settings profiles now.

Was about to post this. I have an Aorus X570 Ultra with a Ryzen 9 3900X and had to enable this in the BIOS. It splits more or less every single PCIe device into its own IOMMU group.


YES! Dude I love you!

I thought I would enable these options one by one and check the difference. SVM and IOMMU I already had enabled, so I enabled ACS, and immediately my IOMMU group count exploded. I now have 44 separate IOMMU groups rather than 20. Even the GPU and audio device of my Radeon card are now in two separate IOMMU groups :laughing:
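
For anyone who wants to compare on their own board, one quick way to count the groups is a one-liner along these lines:

find /sys/kernel/iommu_groups/ -mindepth 1 -maxdepth 1 -type d | wc -l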

But most importantly:

IOMMU Group 32:
	0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] [10de:1c02] (rev a1)
	0c:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)

I did not touch the “Enable AER cap” or “Alternative routing” options, for the record.

Here is the new full IOMMU output for those who are interested
IOMMU Group 0:
	00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 1:
	00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU Group 10:
	00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 11:
	00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 12:
	00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 13:
	00:08.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 14:
	00:08.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU Group 15:
	00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
	00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 16:
	00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0 [1022:1440]
	00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1 [1022:1441]
	00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2 [1022:1442]
	00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3 [1022:1443]
	00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4 [1022:1444]
	00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5 [1022:1445]
	00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6 [1022:1446]
	00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7 [1022:1447]
IOMMU Group 17:
	01:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)
IOMMU Group 18:
	02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream [1022:57ad]
IOMMU Group 19:
	03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
IOMMU Group 2:
	00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU Group 20:
	03:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
IOMMU Group 21:
	03:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
IOMMU Group 22:
	03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
IOMMU Group 23:
	03:05.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a3]
IOMMU Group 24:
	03:08.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a4]
	09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
	09:00.1 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
	09:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU Group 25:
	03:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a4]
	0a:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 26:
	03:0a.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge [1022:57a4]
	0b:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 27:
	04:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)
IOMMU Group 28:
	05:00.0 Non-Volatile memory controller [0108]: Phison Electronics Corporation E12 NVMe Controller [1987:5012] (rev 01)
IOMMU Group 29:
	06:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX200 [8086:2723] (rev 1a)
IOMMU Group 3:
	00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 30:
	07:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 31:
	08:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller [10ec:8125] (rev 01)
IOMMU Group 32:
	0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] [10de:1c02] (rev a1)
	0c:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)
IOMMU Group 33:
	0d:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Upstream Port of PCI Express Switch [1002:1478] (rev c1)
IOMMU Group 34:
	0e:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 XL Downstream Port of PCI Express Switch [1002:1479]
IOMMU Group 35:
	0f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [Radeon RX 5600 OEM/5600 XT / 5700/5700 XT] [1002:731f] (rev c1)
IOMMU Group 36:
	0f:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio [1002:ab38]
IOMMU Group 37:
	10:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
IOMMU Group 38:
	11:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
IOMMU Group 39:
	11:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
IOMMU Group 4:
	00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 40:
	11:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU Group 41:
	11:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
IOMMU Group 42:
	12:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 43:
	13:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 5:
	00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU Group 6:
	00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU Group 7:
	00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 8:
	00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU Group 9:
	00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]

While I was at it I decided to do a little science. I found the AER Cap and Alternate Routing options. (The latter was tricky because the option is called “PCIe ARI Support”, and “Alternate Routing” isn’t mentioned unless you read the option description.)

Both of these options were set to "Auto" by default. I tried setting each of them to "Enabled" and "Disabled" separately, and I can report that neither made any difference to the show-iommu-groups.sh script output. (The output had the same MD5 sum after every change.)
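
The comparison itself was nothing fancier than something like:

bash show-iommu-groups.sh | md5sum

run after each change.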

Agreed! Haha! Soon we'll be like, "Wow, remember the good ole days? You know, back when we could talk about solutions and not get slammed into a corpo-box on Level1!"


Glad to see it worked for you. Please do come back and let us know how your VFIO setup goes. There have been some reports of problems passing through the USB controller, so I'm curious how it will work for you.


Also thanks for doing the “PCIe ARI Support” experiment!


Also, it appears that even with the new AGESA 1.0.0.5 you will still need the FLR patch to pass through the audio and USB properly. See the thread I made in the VFIO section. Make sure to compile that into your kernel for a proper passthrough experience on Ryzen 3000.

I spent the rest of the day tinkering with it, and luckily it turned out really well!

The biggest issue I had was that games were stuttering quite a bit, even though they had a high average FPS. This turned out to be caused by the frequency governor on my CPU. By default it is set to "schedutil", which apparently doesn't respond at all to CPU activity inside the VM. I tested this by watching the CPU frequency while running prime95 inside the VM, and it wasn't affected whatsoever. However, when I ran stress -c 4 on the host, the frequency shot up to max.
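
For the record, "watching the CPU frequency" here just means keeping an eye on the host clocks with something like:

watch -n1 "grep 'cpu MHz' /proc/cpuinfo"

while the load was running.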

Setting the “performance” governor on all my cores like this:

sudo cpupower frequency-set -g performance

Completely resolved the issue, and games now perform indistinguishably from how they did on the PC I took the GPU from :grin: Unfortunately that GPU was a GTX 1060, so it doesn't really deliver the frame rates I want. For that reason I'll probably keep dual booting to play games using the host's Radeon RX 5700 XT card until the RTX 3080 Ti is released. (This whole endeavor has really been to prepare my PC for that.)

Another issue I had was forwarding my mouse and keyboard to the VM. I tried the Evdev method (as described here), which worked pretty much flawlessly for desktop use, and it's really handy to be able to pass the mouse and keyboard back and forth using a keyboard shortcut. However, as soon as I jumped in-game in Apex, I noticed that any fast flick of the mouse would be completely borked: my crosshair would first go in the direction I was pulling the mouse, then jump partway back towards where it started. This is the kind of weird behavior I would expect if I were forwarding the mouse using something like Synergy.
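
For reference, in case the link rots: the Evdev method boils down to handing the raw /dev/input device nodes to QEMU through qemu:commandline in the libvirt domain XML. A rough sketch, where the by-id paths are placeholders you would replace with your own devices:

<!-- requires xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" on the <domain> element -->
<qemu:commandline>
  <qemu:arg value="-object"/>
  <qemu:arg value="input-linux,id=kbd1,evdev=/dev/input/by-id/YOUR_KEYBOARD-event-kbd,grab_all=on,repeat=on"/>
  <qemu:arg value="-object"/>
  <qemu:arg value="input-linux,id=mouse1,evdev=/dev/input/by-id/YOUR_MOUSE-event-mouse"/>
</qemu:commandline>

Pressing both Ctrl keys at the same time is what toggles the grab between host and guest.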

I also had an issue with forwarding audio from the VM to PulseAudio (as described here). The audio quality was perfect as far as I could tell, with no stuttering or crackling, but unfortunately the delay was simply too high for games. I would estimate around 200 ms of delay compared to using the monitor's speakers over HDMI directly. It's possible this could be resolved with some configuration; I'll probably look into that in the future.

For now, I’ve worked around both of the above issues by simply forwarding the USB devices directly to the VM (as described here). It turns out this is really simple and quite convenient. I run a script like this right after booting the VM:

#!/bin/bash

virsh attach-device wintendo /home/tomas/.VFIOinput/steelseries_siberia_1.xml
virsh attach-device wintendo /home/tomas/.VFIOinput/steelseries_siberia_2.xml
virsh attach-device wintendo /home/tomas/.VFIOinput/corsair_keyboard.xml
virsh attach-device wintendo /home/tomas/.VFIOinput/logitech_g900_mouse.xml

I have separate keyboards for gaming and working (writing code) so I can safely forward the gaming keyboard and the mouse to the VM without losing access to the host.

When I power off the VM, all the devices are automatically returned to the host.

I also tested out Looking Glass, which I was really excited about. I tried both the stable version and the bleeding edge version, but unfortunately the performance was nowhere near good enough for competitive games. The game's frame rate was around 100 FPS, and the frame rate of the Looking Glass client was several hundred FPS, but the "UPS" counter in Looking Glass never managed to reach even 60 while I was in-game in Apex. (UPS is the number of frames captured and transferred to the host per second.) The experience otherwise was extremely awesome and polished though. If I were playing an RTS game where frame rate wasn't important, I would definitely be using it rather than switching monitor outputs.

I plan on writing an “initiate game mode” script next weekend, which will:

  • Launch the wintendo VM
  • Forward the appropriate peripherals
  • Set the performance governor on the CPU cores I’ve pinned the VM on
  • Detach the gaming monitor from the host using xrandr so it can be dedicated to the VM (I have 2 monitors); see the rough xrandr sketch after this list
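
The xrandr part will probably end up being something along these lines (DP-2 and DP-0 are placeholders; xrandr --query lists the real output names):

# hand the gaming monitor over to the VM
xrandr --output DP-2 --off

# and in the teardown script, give it back to the host
xrandr --output DP-2 --auto --right-of DP-0

I haven't written that part yet, so treat it as a sketch.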

Then I’ll write a corresponding script that undoes all of the above and shuts down the VM. Hopefully this will all end up being more convenient than dual booting :grin:

I also want to find a performant way of binding a host directory to the guest, so I can record video from inside the VM directly to the host using OBS. Hopefully libvirt already includes a way to do this that doesn't involve installing software on both machines and syncing over the network. I haven't even started looking into this though, so I would appreciate any pointers.

The only other noteworthy thing I discovered that I can think of right now is that you can use logical volumes (LVM) as storage for VMs. This is what I’ve done for the Wintendo VM:

➜ sudo lvs
  LV                           VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home                         main -wi-ao---- 1000.00g                                                    
  system                       main owi-aos---  100.00g                                                    
  system-snapshot-before-iommu main swi-a-s---  100.00g      system 1.38                                   
  wintendo                     main -wi-a-----  256.00g
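
Creating the volume itself was just the standard lvcreate invocation, something like this (assuming the volume group is called main, as above):

sudo lvcreate -L 256G -n wintendo main

which gives you the /dev/main/wintendo block device that the config below points at.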

Libvirt config:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/main/wintendo"/>
  <target dev="sda" bus="sata"/>
  <address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>

It's currently attached to the VM as a SATA disk, which I've read is suboptimal for performance, but it didn't have a noticeable impact on my game loading times. I haven't done any disk benchmarks inside the VM yet, though.

The alternative is using a Virtio disk, which should be significantly faster, but according to this blog post it tends to break when running Windows Update, so I'm in no hurry to switch away from the SATA approach.
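
My understanding is that switching would mostly be a matter of changing the target bus and installing the virtio storage drivers in the guest first, roughly this:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/main/wintendo"/>
  <target dev="vda" bus="virtio"/>
</disk>

But again, I haven't tried it, so treat that as a sketch.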

I am currently using Virtio for the VM’s NIC though, which appears to be working perfectly.

I haven’t really used the VM for more than a couple of hours of gaming so far though, so the opinions I’ve expressed in this post could definitely change :slight_smile:

Edit: Played Apex in the VM for about 4 hours straight tonight, absolutely no issues! Turned my resolution down to 1280x1024 so this puny GPU can handle more than 100 fps :slightly_smiling_face: So far it’s indistinguishable from playing on bare metal.

Could you elaborate on this a bit? I'm forwarding USB devices to the VM using virsh attach-device ..., and from the hour or two I tested it, it appeared to work perfectly. Should I expect any issues?

Edit: Never mind, found your post. Apparently this just applies to PCI passthrough of USB controllers, which I’m currently not doing. I’m curious what the advantage of passing a whole controller to the VM is over just forwarding the USB devices one by one. Latency?

Also I saw you recommended using a specific scheduler when compiling the kernel, but I couldn’t find an explanation why. Could you elaborate?

For anybody interested, here is the initiate-game-mode script I’m currently using:

#!/bin/bash

main() {
    sudo -v

    # ckb-next needs to die, otherwise the Corsair keyboard won't be forwarded
    killall ckb-next

    echo
    echo "Setting performance governor on all CPU cores"
    echo

    sudo cpupower frequency-set -g performance
    sudo virsh start wintendo

    echo
    echo "Attaching USB devices"
    echo

    sudo /home/tomas/.vfio-devices/devices.sh attach

    echo
    echo "GAME MODE INITIATED!"
    echo

    while true; do
        if [[ "$(sudo virsh domstate wintendo)" == "shut off" ]]; then
            cleanup
            exit 0
        fi

        sleep 3
    done
}

cleanup() {
    echo "Game mode disengaged, cleaning up..."

    echo
    echo "Restoring schedutil governor on all CPU cores"
    echo

    sudo cpupower frequency-set -g schedutil

    gtk-launch ckb-next >/dev/null 2>&1

    echo
    echo "Bye!"
}

main

Basically it sets the performance governor on all CPU cores, launches the Wintendo VM and forwards all my USB peripherals. When the VM shuts down it undoes all the startup steps.

Here is .vfio-devices/devices.sh:

#!/bin/bash

sudo -v

if [[ $1 == "attach" ]]; then
    find /home/tomas/.vfio-devices/ -iname "*.xml" -exec \
        sudo virsh attach-device wintendo {} \;
else
    find /home/tomas/.vfio-devices/ -iname "*.xml" -exec \
        sudo virsh detach-device wintendo {} \;
fi

And here is an example of a peripheral XML file (.vfio-devices/logitech_g900.xml):

<hostdev mode="subsystem" type="usb" managed="no">
    <source>
        <vendor id="0x046d"/>
        <product id="0xc539"/>
    </source>
</hostdev>
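
For the record, the vendor and product IDs come straight from lsusb on the host: the "ID xxxx:yyyy" column is vendor:product, so this mouse shows up with ID 046d:c539. Something like

lsusb | grep -i logitech

is enough to grab them; the bus/device numbers in that output don't matter, since the XML matches on the IDs.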

After a week of gaming on this setup several hours per day, I have yet to find anything to complain about! It’s buttery smooth, and with the game mode script it’s quite convenient to start/stop gaming, much more so than dual-booting.

I did purchase a used GTX 1080 today to replace the GTX 1060 3GB I was using for the Wintendo VM, and it improved the experience quite dramatically. I'm still limited to 1920x1080 if I want a consistent 140+ FPS though… But it will have to do until the RTX 3080 Ti comes out :money_mouth_face:
