Two identical USB controllers, second VM never boots

I bought a StarTech 7-port USB 3.0 PCIe card with the Renesas chipset, and everything works just fine when I pass it through to a VM.

I figured buying another one just like it would allow me to have another VM with USB hotplug capabilities.

But unfortunately that isn’t the case. I’m able to start the first VM and boot into Windows without any trouble. When I start the second, the USB card seems to disconnect, but the VM’s memory never gets allocated and nothing else happens.
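
If host-side logs would help, this is roughly what I can watch while starting the second VM (assuming the kernel log and the hypervisor journal are the right places to look; the libvirtd unit name is a guess and may differ on other setups):

dmesg -wT | grep -iE 'vfio|iommu|dmar'   # follow kernel VFIO/IOMMU messages live
journalctl -f -u libvirtd                # hypervisor log; unit name assumes libvirt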

I thought something in the other VM, like a conflicting device, might be causing it to hang, but that was not the case. I double-checked all the devices and verified that both VMs boot and run perfectly when no USB controller is passed through.

Any idea why this would happen? If I can pass through two identical GPUs to two different VMs, why wouldn’t the USB controller cards work?

lspci -nnv

shows this:

81:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03) (prog-if 30 [XHCI])
Subsystem: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014]
Physical Slot: 6
Flags: bus master, fast devsel, latency 0, IRQ 25, NUMA node 1
Memory at fb200000 (64-bit, non-prefetchable) [size=8K]
Capabilities:
Kernel driver in use: vfio-pci

82:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03) (prog-if 30 [XHCI])
Subsystem: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014]
Physical Slot: 5
Flags: bus master, fast devsel, latency 0, IRQ 26, NUMA node 1
Memory at fb100000 (64-bit, non-prefetchable) [size=8K]
Capabilities:
Kernel driver in use: vfio-pci
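
For reference, both cards carry the identical vendor:device ID [1912:0014], so binding vfio-pci by ID grabs both of them at boot, which lines up with the “Kernel driver in use: vfio-pci” shown above. The usual modprobe convention looks like this (sketch only; the exact file and method vary by distro):

# /etc/modprobe.d/vfio.conf -- or vfio-pci.ids=1912:0014 on the kernel command line
options vfio-pci ids=1912:0014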

I am always able to start whichever VM comes up first, but the problem then repeats for the other VM, so that ruled out the suspicion that one particular VM is causing the issue.

Could it be that I just got lucky with passing through the GPUs, and that I should always aim to use different hardware for each VM?

On a final note, I’m using the ASUS Z10PE-D8 WS motherboard, with the USB cards in slots 5 and 6. Both of those slots are controlled by CPU2, but my VMs are pinned to independent sets of cores across the two physical processors. I’ll try moving the cards to different slots and see if that does anything…
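
Before moving anything, I can also confirm which NUMA node each card hangs off of and which cores belong to that node (the lspci output above already reports NUMA node 1 for both cards), along the lines of:

cat /sys/bus/pci/devices/0000:81:00.0/numa_node   # card in physical slot 6
cat /sys/bus/pci/devices/0000:82:00.0/numa_node   # card in physical slot 5
lscpu | grep -i 'numa node'                       # which cores belong to node 0 and node 1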

I am no expert, a noob even, but is it because they are identical? The host cannot “tell them apart,” so to speak, like how trying to use two of the same GPUs for the host and for passthrough causes conflicts?

What I meant earlier was that I passed two identical GPUs through to two different VMs; I still have a third, different GPU for the host.
Having done this with GPUs, I thought the USB cards wouldn’t be a problem.

Very sorry, I jumped the gun. I did not even read that part about the GPUs before; that was just my own example.

Sorry, I have to defer to more knowledgeable people at this point.

Have you double-checked your IOMMU groups to see whether everything is still isolated? The two USB cards might be in the same group.
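
Something like this dumps every group and its members, so you can see at a glance whether the two cards (and anything sharing a group with them) are actually isolated:

for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done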

Yes, I have. The code block in the original post was grabbed from lspci -nnv after everything was installed in the PCIe slots. All of the GPUs and USB cards are in separate groups.

Do try moving the cards into different slots, and also try swapping which card is attached to each VM.
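
If you’re on libvirt, which card a VM gets is just the host PCI address inside its <hostdev> block, so swapping them around is only a matter of editing that address (the VM name below is a placeholder):

virsh dumpxml vm1 | grep -A5 '<hostdev'   # show which host PCI address vm1 is handed
virsh edit vm1                            # change the source <address> bus/slot to the other card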

IDK what your groups look like or what the chipset is, but you could try passing through the motherboard’s USB controllers and leaving one of the add-in USB cards for the host.
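
Listing the onboard controllers and the group each one lands in would tell you quickly whether that’s viable, since onboard controllers often share a group with other chipset devices:

lspci -Dnn | grep -i 'usb controller'
# and the IOMMU group for each of them:
for d in $(lspci -Dnn | awk '/USB controller/ {print $1}'); do
    echo "$d -> group $(basename "$(readlink "/sys/bus/pci/devices/$d/iommu_group")")"
done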