B550 & IOMMU Issues

Got a tricky one here. I think I am SOL with this one and will need an X570 board.

I am running an Asus ROG Strix B550-F Gaming WiFi II with a Ryzen 5 5600G.

This has Proxmox with some VMs, including pfSense and a Debian VM for game servers, web hosting, etc.

I wanted to run Plex on it as well and decided to grab a 1050 Ti for video transcoding, but I have hit an issue that I assume is down to the IOMMU grouping.

I have a dual-port Intel 2.5 GbE NIC in a PCIe slot, and this is passed through to pfSense. All works fine.

I added the GPU to the mix and, well… it's all gone wrong.

No matter what combination of slots I use for the two cards, Proxmox will crash when starting up one of the VMs. Either pfSense starts fine but Proxmox crashes when I start Debian (with the GPU passed to it, of course), or Debian can start but starting pfSense crashes Proxmox. It just depends which slots the cards are in.

But either way, they won't work at the same time.

A quick look at the IOMMU grouping in Proxmox shows that the NIC and GPU are not in the same group (NIC in group 11 and GPU in group 6, for example), if I am reading it right, that is.
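For anyone wanting to check the same thing, the grouping can be dumped with a plain sysfs walk like this (run as root on the Proxmox host; this is just reading the standard `/sys/kernel/iommu_groups` layout the kernel exposes, nothing Proxmox-specific):

```shell
#!/usr/bin/env bash
# Print every IOMMU group and the PCI devices inside it.
shopt -s nullglob
found=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  found=1
  group=${dev#/sys/kernel/iommu_groups/}   # e.g. "6/devices/0000:06:00.0"
  group=${group%%/*}                       # -> "6"
  addr=${dev##*/}                          # -> "0000:06:00.0"
  # lspci -nns gives the human-readable name; fall back to the bare address
  echo "group $group: $(lspci -nns "$addr" 2>/dev/null || echo "$addr")"
done
[ "$found" -eq 1 ] || echo "no IOMMU groups found (IOMMU disabled in BIOS or kernel cmdline?)"
```

Any two devices that print the same group number have to go to the same VM (or stay on the host) together.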

Please bear in mind I am fairly new to Proxmox and only learned enough to get it set up to do what I wanted. The same goes for Linux distros: I can get things set up the way I need, but that's about it. I can use the terminal and so on, but my experience sits somewhere between noob and kinda-know-what-I'm-doing.

If this was all on Windows I'd be off to the races! xD

Any input would be helpful, unless, like I said at the start, I'm just boned due to the B550 chipset and its rather shit implementation of IOMMU.

Log in to the Proxmox console and check whether the vfio driver is in use for the GPU and the NIC:

lspci -vv | less

Hit / and enter a search string to find the card in question (e.g. “/1050<Enter>”).
Scroll down looking for the line “Kernel driver in use:” - it should say vfio-pci.
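If you would rather see everything at once, the bindings can also be read straight out of sysfs (plain shell, nothing Proxmox-specific); any device set up for passthrough should show vfio-pci here:

```shell
# Walk /sys/bus/pci/devices and print each device's address plus the driver
# currently bound to it. The "driver" entry is a symlink to the driver's
# directory; devices with nothing bound print "(none)".
total=0
for dev in /sys/bus/pci/devices/*; do
  [ -e "$dev" ] || continue                       # glob matched nothing
  drv=$(readlink "$dev/driver" 2>/dev/null) || drv="(none)"
  printf '%s %s\n' "${dev##*/}" "${drv##*/}"
  total=$((total + 1))
done
echo "$total PCI devices"
```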

I did change it to vfio-pci.
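For anyone following along, that binding lives in a modprobe config, something like this (the IDs below are just examples for a 1050 Ti's video and HDMI-audio functions; use the [vendor:device] pairs `lspci -nn` prints for your own card, then rebuild the initramfs with `update-initramfs -u` and reboot):

```
# /etc/modprobe.d/vfio.conf
# Example IDs only - substitute the values from `lspci -nn`
options vfio-pci ids=10de:1c82,10de:0fb9
```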

The problem is the IOMMU groups. No matter which slot I use, I end up with two devices sharing a group, which means I can't split them between different VMs.

Currently I have the dual-port NIC in the top x16 slot. It shares a group with some other devices that I don't care about, and the onboard NIC that I use for Proxmox itself is in a different group, so passing the dual-port NIC to pfSense works.

If I add the GPU to the other x16 slot, it pops up in the same group as the onboard NIC, so I can't pass the GPU to Debian.

If I put the GPU in the top slot, I have to move the dual-port NIC to another slot, and it then shares a group with the onboard NIC.

It seems all the PCIe slots apart from the top x16 share a group with the onboard NIC.

It seems it will be a lot easier to get an X570 mobo, which “should” have better IOMMU group separation.

Can confirm: an X570 motherboard is now in use, the IOMMU grouping is much better, and my network cards and GPU are all in separate groups.

Hehe, I'm glad you got it working with X570, but be warned, you might still get it to tip over :)). I'm using an X570 Taichi and over time ended up populating all the slots and most other holes I found on the motherboard. Until I got it into the current config, there was a point when, at every reboot, some component or other would not show up, probably because of how the mobo was splitting/assigning the PCIe resources. By moving the cards around, I got it into a stable config. The cards are an RTX 3080, an NVMe card, a network card, and a SATA splitter card, with both onboard NVMe slots and most onboard SATA ports in use.

Oh yeah, if I move the cards around, the grouping changes and I have to reassign the cards in Proxmox via the CLI.
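The reassignment is just pointing the VM's hostpciN entry at the device's new address; for example (the VM ID 101 and the PCI address here are placeholders):

```
# qm set 101 -hostpci0 0000:03:00,pcie=1
# ...which writes this line into /etc/pve/qemu-server/101.conf:
hostpci0: 0000:03:00,pcie=1
```

Giving the address without a function suffix (0000:03:00 rather than 0000:03:00.0) passes all functions of the device, which is handy for a dual-port NIC or a GPU with its HDMI audio.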

I am using an MPG X570S CARBON MAX WIFI now. There is only one PCIe slot left, plus the three M.2 slots, as I am using SATA SSDs for the moment.

Not sure how things will change if I get M.2 SSDs (M.2 slots often share lanes with the PCIe slots or SATA ports), so I will need to read the manual.

This mobo was a good deal… ish. It was on eBay for spares, sold as not working. I got it working with BIOS Flashback, so that's a win. But I noticed the onboard NIC does not work. Good job I have a spare 2.5 GbE NIC!