The Elephant in the room: AM4/TR4 Chipset IOMMU separation

So I want to discuss this because it has been quite a while and chipset IOMMU separation still isn’t resolved, meaning you basically have to use the lanes wired directly to the CPU on Threadripper if you want to do something like 2-GPU passthrough plus a Blackmagic capture card and a USB 3.0 controller all in one box.

With chipset hardware ACS that works properly, like on X79, X99 and X299, you can throw a USB controller on an x1 port and still grab it cleanly without a special-snowflake patched kernel. x4 ports and Blackmagic cards would work too.
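(For anyone wanting to sanity-check that on their own board, here is a minimal way to do it; 0000:03:00.0 is just a placeholder for the USB controller's address from lspci:)

# List every device sharing an IOMMU group with the controller in question.
# 0000:03:00.0 is a placeholder address; substitute the one from your own lspci output.
ls /sys/bus/pci/devices/0000:03:00.0/iommu_group/devices/
# If only that one address comes back, the controller is isolated and can be
# bound to vfio-pci without any ACS override patch.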

When will AMD get up to the same level? This is genuinely what’s holding me back from getting a 2920X and a Threadripper mobo for 2 systems in 1: a Windows guest VM for gaming, and Linux for streaming/everyday use. If I could separate the NICs on multi-NIC motherboards and just plug a Renesas USB 3.0 card into the mobo’s chipset x1 port, that’d be a killer VFIO setup.

Am I mistaken in thinking ASRock has it right with the X399 Taichi? I was actually looking this up earlier today so I could pass my RAID card to a VM.

Chipset still groups everything together in Group 13. The NICs aren’t separated.

Bumping this: is there anything on the mailing lists that I don’t know about? Hopefully there is something in the works for providing better ACS on the AMD chipsets.

The last update to PCI-E ACS was AGESA 1.0.0.6, and we’ve heard nothing from AMD since then.

Native PCIe devices, including that weird thing where the NVMe and the 8x slot were grouped together, are basically fixed. The chipset remains an issue, but I suspect that may be a limitation of ASMedia. The ACS patch works fine for NICs and USB.

Z390 has no IOMMU separation for CPU devices, the opposite problem. Which suuuuuucks.

Really? Wow. So it looks like X299 still reigns supreme vs Z390.

Also, if the Ryzen/Threadripper chipset is ASMedia-made, ASMedia seriously needs to update its firmware within the AGESA updates to fix ACS.

This is kind of a disadvantage of contracting someone else to make the chipset when they’re hard to work with for quality-of-life upgrades down the line, like IOMMU. The solution, I guess, is for AMD to build the chipset fully in-house, down to the low-level functions.

With your two GPUs, the capture card and the USB controller, the four slots on Threadripper should be sufficient for you. Passing through a 1GbE NIC is not really necessary. If you want to use a separate network jack for your VM, you can either connect a USB NIC to the passed-through USB card or create a network bridge containing the second NIC and the tap device of your VM.
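For the bridge option, a minimal sketch with iproute2 (enp5s0 and tap0 are placeholder names for the second onboard NIC and the VM's tap interface; check ip link for the real ones):

# Bridge the spare onboard NIC together with the VM's tap device.
ip link add name br0 type bridge
ip link set enp5s0 master br0
ip link set tap0 master br0
ip link set enp5s0 up
ip link set tap0 up
ip link set br0 up
# The guest now sits on the same layer-2 segment as the physical jack, no passthrough required.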

I am running a pretty similar setup right now, just with an InfiniBand card instead of a capture card. And I am connecting every additional piece of hardware that the VM needs via USB (input, sound, network, camera). Since USB 3.0, almost everything can be connected via USB these days. :3

Yeah, about the NICs: I still need to pass through a physical NIC to a pfSense VM within the QEMU network, and let the QEMU network be the sanitized LAN for Windows 10, so that nothing gets out of a physical port.

I’m keeping in mind the case where someone is paranoid enough not to allow Windows 10 at a LAN party. If I sanitize the NIC output for the Windows 10 traffic, it’s one less thing the LAN party host needs to worry about.
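If the grouping ever allows it, the NIC handoff itself would look roughly like this (a rough sketch only; 0000:2a:00.0 is a placeholder BDF, take the real one from lspci -nn, and everything else in that NIC's IOMMU group has to come along with it):

# Load vfio-pci and rebind the NIC to it.
modprobe vfio-pci
echo 0000:2a:00.0 > /sys/bus/pci/devices/0000:2a:00.0/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/0000:2a:00.0/driver_override
echo 0000:2a:00.0 > /sys/bus/pci/drivers_probe
# Then hand it to the pfSense guest:
#   qemu-system-x86_64 ... -device vfio-pci,host=0000:2a:00.0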

The Aquantia NIC, which you’d think would be connected directly to the CPU, still goes through the chipset on some mobos that include it.

And sometimes I do use the x1 port to dedicate a USB controller to the VM. Wasting an x8 slot doesn’t make sense, especially if you were crazy enough to SLI your VM.

Yeah, about the NICs: I still need to pass through a physical NIC to a pfSense VM within the QEMU network, and let the QEMU network be the sanitized LAN for Windows 10, so that nothing gets out of a physical port.

You could also assign a USB NIC connected to a chipset USB port to your pfSense VM using "-device usb-host". That would be a physical port. Hope that helps.
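For reference, a rough sketch of what that looks like on the QEMU command line (the hostbus/hostport numbers are placeholders; lsusb -t shows the real ones):

# Hand a USB NIC on a chipset USB port straight to the guest via an emulated xHCI controller.
qemu-system-x86_64 -enable-kvm -m 2G \
  -device qemu-xhci,id=xhci \
  -device usb-host,bus=xhci.0,hostbus=3,hostport=2
# hostbus=3,hostport=2 are example values; read yours off lsusb -t.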

And sometimes I do use the x1 port to dedicate a USB controller to the VM. Wasting an x8 slot doesn’t make sense, especially if you were crazy enough to SLI your VM.

Actually, I like to use one of those PCIe 2.0 x4 cards with four USB 3.0 jacks. There, each USB 3.0 jack can actually use its full bandwidth at once. That will probably never happen, but having the headroom just feels more proper. :wink:

https://qemu.weilnetz.de/doc/qemu-doc.html#usb_005fdevices

I believe those cards have trouble being separated into separate IOMMU groups for each port’s controller. It’s the whole card all at once, or nothing. The PLX chip has no ACS.

I’m one of those people who doesn’t trust USB “packets” versus FireWire and its streaming data (and I think PCI-E behaves similarly to FireWire in that regard). I kinda want a PCI-E direct-access NIC. Why do you think Thunderbolt HDDs and SSDs replaced FireWire 400 and FireWire 800 drives for the streaming audio/music creation people?

I believe those cards have trouble being separated into separate IOMMU groups for each port’s controller. It’s the whole card all at once, or nothing. The PLX chip has no ACS.

Of course, you pass through the whole card, just like you would do with the PCIe x1 cards.

I’m one of those people who doesn’t trust USB “packets” versus FireWire and its streaming data (and I think PCI-E behaves similarly to FireWire in that regard). I kinda want a PCI-E direct-access NIC.

Okay, whatever makes you feel better. :slight_smile:

Recently had this trouble myself with the ASRock X470 Taichi Ultimate. The majority of the motherboard-provided I/O is in a single IOMMU group, except for 2 USB ports and the security chip. Those are in their own groups and can be passed through.

For pfSense, are you able to logically separate your two NICs and then give the pfSense VM two virtual NICs, one on each segment? This is what I had to do to get around not being able to pass through any NICs to my pfSense VM.
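In case it helps, a minimal sketch of that layout (br-wan/br-lan and the NIC names are placeholders, and QEMU's bridge helper needs both bridges listed in /etc/qemu/bridge.conf):

# One Linux bridge per segment, with a physical NIC enslaved to each.
ip link add name br-wan type bridge
ip link add name br-lan type bridge
ip link set enp5s0 master br-wan
ip link set enp6s0 master br-lan
ip link set br-wan up
ip link set br-lan up
# Give the pfSense guest one virtio NIC per bridge instead of a passed-through port.
qemu-system-x86_64 -enable-kvm -m 2G \
  -netdev bridge,id=wan,br=br-wan -device virtio-net-pci,netdev=wan \
  -netdev bridge,id=lan,br=br-lan -device virtio-net-pci,netdev=lan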

I’m just theorizing what I could do if passthrough of a physical NIC works with a QEMU based pfsense VM. Haven’t tried it yet.

Furry,

With the IOMMU grouping on a mobo like the X370 Taichi, you can probably do what you want, or most of it. As an example from my X370 Gaming, here is the listing with a card installed in both full-sized slots, and also an NVMe drive loaded.

As shown, I have the ability to pass through both 8x “GPU” slots, the CPU-connected M.2 4x slot, the HD audio, and a USB controller (also some SATA ports).

While I haven’t yet tried passing through 2 GPUs, I have tried passing through a GPU and another device in the 8x slot, in addition to the USB and audio simultaneously. USB passthrough gets 4 dedicated ports on the back. No ACS patch is needed for this. At some point I plan to try the dual GPU and maybe even the NVMe slot passthrough (that requires swapping NVMe drives around, so it’s a pain).
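(A listing like the one below comes from the usual sysfs walk; a rough sketch, with output details depending on your lspci version:)

# Print every PCI device together with the IOMMU group it sits in.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#*/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU Group %s %s\n' "$g" "$(lspci -nns "${d##*/}")"
done | sort -V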

IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 1 00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 2 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 3 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 4 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 5 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 6 00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 7 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 8 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 9 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 10 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 11 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 12 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
IOMMU Group 12 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 13 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 13 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 13 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 13 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 13 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 13 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 13 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric Device 18h Function 6 [1022:1466]
IOMMU Group 13 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 14 01:00.0 Non-Volatile memory controller [0108]: Toshiba America Info Systems Device [1179:0115] (rev 01) [M.2 slot]
IOMMU Group 15 03:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b9] (rev 02)
IOMMU Group 15 03:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b5] (rev 02)
IOMMU Group 15 03:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b0] (rev 02)
IOMMU Group 15 1d:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 1d:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 1d:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 1d:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 1e:00.0 Ethernet controller [0200]: Aquantia Corp. AQC108 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] [1d6a:d108] (rev 02)
IOMMU Group 15 20:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
IOMMU Group 15 21:00.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1184]
IOMMU Group 15 26:01.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1184]
IOMMU Group 15 26:03.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1184]
IOMMU Group 15 26:05.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1184]
IOMMU Group 15 26:07.0 PCI bridge [0604]: ASMedia Technology Inc. Device [1b21:1184]
IOMMU Group 15 27:00.0 Network controller [0280]: Intel Corporation Device [8086:24fb] (rev 10)
IOMMU Group 15 28:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 460] [1002:67ef] (rev e5)
IOMMU Group 15 28:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aae0]
IOMMU Group 15 2a:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 15 2c:00.0 Non-Volatile memory controller [0108]: Intel Corporation Device [8086:f1a5] (rev 03)
IOMMU Group 16 2d:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1470] (rev c1)
IOMMU Group 17 2e:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1471]
IOMMU Group 18 2f:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Vega [Radeon RX Vega] [1002:687f] (rev c1)
IOMMU Group 19 2f:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Device [1002:aaf8]
IOMMU Group 20 30:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller [1b21:1242][PCIe 8x slot passthrough]
IOMMU Group 21 31:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
IOMMU Group 22 31:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 23 31:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] USB 3.0 Host controller [1022:145f]
IOMMU Group 24 32:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]
IOMMU Group 25 32:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 26 32:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]

Bill

Yes, but my problem is that I need direct access to the onboard NICs, and those are still lumped together.

I currently run X79 and that’s not a problem. I can even run a proper USB 3.0 controller like this: