ASRock x470 Taichi (VM Iommu Grouping question)


I recently bought this board (waiting for delivery).

Just curious before I start buying parts: does anyone know the IOMMU groupings for the NVMe drives, since the board has two M.2 ports? I was thinking of doing this… let me know if you see any issues.

I was planning to buy two 500 GB NVMe drives, one for each M.2 slot, and dedicate one of them to the VM as its own passed-through drive. Would that work with the IOMMU groupings?

I have a low-wattage workstation GPU (FirePro W600) for my Linux host and an RX 580 for the Windows guest VM.


The top and middle PCIe slots, and the M.2 slot next to the top PCIe slot, each get their own group and lanes from the CPU. The bottom M.2 slot hangs off the chipset and cannot be passed through.
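Once the board arrives, the groupings above are easy to verify from a Linux host. A minimal sketch that walks the standard sysfs layout and prints each group's devices (nothing is printed if the IOMMU is disabled in the BIOS):

```shell
#!/bin/sh
# Print every IOMMU group and the PCI devices inside it.
# If the host has no IOMMU enabled, /sys/kernel/iommu_groups
# is empty and the loop prints nothing.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue       # glob matched nothing
    group=${dev%/devices/*}         # strip "/devices/<pci-addr>"
    printf 'IOMMU group %s: %s\n' "${group##*/}" "${dev##*/}"
done
```

Pipe the output through `sort -V` to get the groups in order; devices that share a group must be passed through together.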

The USB ports in the TPM chip's IOMMU group can also be passed through, but if you shut the VM down, you'll need to reboot the host before they work again (the VM will hang if it starts back up before that). Everything else hangs off the chipset and generally cannot be passed through. I'm using VMware ESXi, and toggling passthrough on anything other than the above just keeps saying the host needs a reboot, but it's possible VFIO handles it better.


So I can pass the top PCIe slot and M.2 through to Windows, and use the bottom M.2 and a normal second PCIe slot for the Linux host? My Linux GPU is a FirePro workstation card, so it shouldn't be an issue in a lower-tier slot.

Hmm, about the USB ports: what if I buy a PCIe USB card, would that solve the problem? Or will it still need a reboot since it would be locked/assigned to the VM?


Yes. If you put the Windows GPU in the top slot, the USB card in the middle slot, and the FirePro for the Linux host in the bottom slot, you'll be able to pass through the Windows GPU, the USB card, and the top NVMe drive to the VM, and restart it all you like.

I'm running the X470 Taichi Ultimate (it has a 10G NIC; that's the only difference) and need that middle slot for an HBA passed through to my FreeNAS VM, so I had to use the two onboard USB ports in the TPM chip's IOMMU group to make my all-in-one box work (FreeNAS, pfSense, and a gaming VM, all running on VMware ESXi). I tried the PCIe x1 slots, but since they hang off the chipset, I wasn't hopeful. Lo and behold, they can't be passed through.


Sad to hear it's not working out. Do you think this could work?


I've never tried it, so I can't say for certain. If I had to guess, you could pass through the entire card and all the devices attached to it, but I'd bet you'd hit the reset bug, where restarting the guest OS it's attached to won't work properly.

Still, if you end up giving it a shot, let me know the results; I'd be VERY interested in them :slight_smile:

Also, my rig is working; I just have to start it up and shut it down in a particular order, and can't restart it easily. Troublesome when troubleshooting, but not that bad once it's working :stuck_out_tongue:


@2bitmarksman Just about to start passing stuff through. I had to put my FirePro workstation GPU in the top slot and the RX 580 in the middle slot, since with a second M.2 installed the last x16 slot is deactivated (the M.2 takes over its lanes).

That leaves me with an x1 slot between the GPUs, which I can use for something else to pass through.

As we discussed in the previous posts about USB needing a reboot because of the lock, I assume I'll need to buy an x1 USB controller card, and put the KVM switch on that card along with whatever else I want to use on Windows, so it doesn't run into issues like needing a reboot.

I assume I'd plug the KVM's USB lead into this USB controller card (with its HDMI on the GPU port), since the card is assigned to Windows in the x1 slot. Then I'd plug my keyboard and mouse into the KVM, along with the wireless USB dongle for my headset, so when it swaps back and forth it also carries the audio over. The other HDMI will be on the Linux host's card, with its USB lead connected to a host USB port. Correct me if I'm wrong.


If you mean plugging the USB card into an x1 slot, that won't be passthrough-capable AFAIK; those are provided by the chipset. The FirePro would need to go in the bottom x4 PCIe 2.0 slot and the USB card in either the top or middle x16 slot.


Hmm, this isn't good…

I have two M.2 drives, and the second one takes over the PCIE5 slot and disables it. So unless I put that M.2 in an x1 slot (it's for the host, so it doesn't need passthrough)… just wondering what the speeds would be. The M.2 drive I'm using isn't super fast anyway.


I guess I'm in the same boat as you with the USB ports, then, as I'll need to restart the host to unlock them with this setup.

I guess I could return the bottom M.2 drive while it's still within Amazon's refund window, but I don't really want to, since I like the boot speed for Linux and Windows, haha.

Got any ideas?


M2_1 = Windows NVMe
PCIE1 = GPU: FirePro (Linux host)
PCIE3 = GPU: RX 580 (Windows guest)
M2_2 = Linux NVMe – since M2_2 is populated, PCIE5 is disabled, as the motherboard manual states.

So I'd have to put the Linux M.2 (currently in M2_2) on an adapter card in the PCIE2 x1 slot, but if I do that, it's probably not bootable, is it? (Not sure, I've never done that before.)

Probably a better idea to return the second M.2 and get a normal 2.5" SATA SSD, I think. But then again, I like the speeds, so I guess it's either that or being stuck with the host-restart issue.

Edit – Made up my mind.
Going to sell my M.2 to regain access to my PCIE5 slot for the USB controller card.

A friend needed an upgrade for his laptop's M.2, so it works out well; I'll just buy a cheap 500 GB Samsung/Crucial 2.5" SSD for Linux or Windows. Question is, which one do you think would benefit more from the speed: Windows in the VM, or Linux? Though I guess we can't pass through SATA, can we, @2bitmarksman?


I'm unsure how KVM/QEMU handles SATA passthrough. I'm currently using VMware ESXi as my hypervisor, which requires iSCSI addressing on the drives to pass SATA drives straight through. Again, unsure how QEMU handles it.
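For what it's worth, the usual approach under KVM/QEMU is not true SATA controller passthrough but handing the whole disk to the guest as a raw block device; the guest then sees a virtio disk backed directly by the drive. A hedged sketch (the by-id path and memory size are placeholders, not real values):

```shell
# Whole-disk handoff sketch for KVM/QEMU (not SATA controller passthrough).
# /dev/disk/by-id/ata-EXAMPLE-DISK is a placeholder; substitute your
# drive's stable by-id path so device renumbering can't bite you.
qemu-system-x86_64 \
  -enable-kvm -m 8G \
  -drive file=/dev/disk/by-id/ata-EXAMPLE-DISK,format=raw,if=virtio,cache=none
```

Under libvirt/virt-manager the equivalent is adding the block device as a `raw` disk with a virtio bus; performance is close to native, though SMART and other drive-level features stay with the host.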

I would pass through the NVMe drive to Windows and the SATA drive to Linux. I personally don't notice much of a difference between them, but since Windows is such a hog, I'd give it the NVMe drive.

I would set up your layout as:

M2_1 = Windows NVMe
PCIE1 = RX 580 for Windows
PCIE3 = USB controller
PCIE5 = FirePro for Linux


Yeah, that's what I was going to do. Got a friend coming over to upgrade his PC; I'll buy the new SSD and USB controller at the same time. Thanks for the insight again.


Got any good tips on USB controllers? So many to choose from, and I'm honestly not sure which ones are good.


Not much, honestly. The more PCIe lanes it uses, the more bandwidth it gets, but most of them are PCIe 3.0 x1 anyway. Just get one with all the ports you need :stuck_out_tongue:


Some on Amazon say things like "doesn't work well with Linux"; I dunno why the fuck that would be the case.


Not sure. AFAIK, since you're passing the device through to a Windows VM, the Linux driver shouldn't matter.


True, it'll run on Windows because of the passthrough, so it should be fine. I'm not going to use those SATA power adapters; going for a solid Molex connector if it actually needs more power. Found a 4-port USB 3.0 card for about $17. Thanks for your help; once the stuff arrives in a few days, I'll be putting it all together.


Good luck, fingers crossed that things just work :stuck_out_tongue:


Took a bit longer than expected to get all my parts, but I picked up a Rosewill USB controller, considering they're a pretty reputable brand compared to some random Chinese controller.

Got it all configured as we discussed earlier in the thread. Now I need to start the passthrough work.

Gotta figure out all the IOMMU groupings/IDs and start that process. Not entirely sure how to pass through a USB controller; I guess it's pretty much like a GPU device? @2bitmarksman Sorry for so many questions, haha. Just trying to work it out in my head before I start throwing things at the CLI.


Yes, it'll be the same process as passing through the GPU. Again, though, I have no experience setting anything up with KVM/QEMU, just the top-level stuff that should carry over to other hypervisors.
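For reference, under VFIO the steps really do mirror the GPU case. A sketch with placeholder values (the PCI address `28:00.0` and the `1912:0015` vendor:device ID below are examples only; read your controller's real values from `lspci`):

```shell
# 1. Find the USB controller's PCI address and its [vendor:device] ID:
lspci -nn | grep -i usb
#    e.g. "28:00.0 USB controller [0c03]: ... [1912:0015]"  (placeholder)

# 2. Tell vfio-pci to claim it at boot (run as root), then regenerate
#    the initramfs and reboot so the host driver never binds to it:
echo 'options vfio-pci ids=1912:0015' >> /etc/modprobe.d/vfio.conf

# 3. In libvirt/virt-manager, add "PCI Host Device" 0000:28:00.0 to the VM;
#    every device in the same IOMMU group must go with it.
```

One caveat: if the controller shares a vendor:device ID with another card you want to keep on the host, bind by PCI address via a driver_override script instead of the `ids=` option.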