Can't Seem To Get IOMMU Working

I am setting up a Proxmox server that will host TrueNAS Scale in a VM. I have 5 drives I want to pass through to TrueNAS, but setting up IOMMU is proving difficult. I want to preface this by saying I'm a novice.

I've followed the PCI Passthrough guide on Proxmox's wiki, and whether it's a fresh install of Proxmox with no changes, a system with all the changes they list, or even additional settings found from other users with similar issues, I get the same results.

No matter what, any time I run dmesg | grep -e DMAR -e IOMMU, this is always the output:

[    0.657987] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[    0.661309] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[    0.661497] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

Additionally, I list out the IOMMU groups with:

for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s ' "$n"
    lspci -nns "${d##*/}"
done

and this is what it outputs:

IOMMU group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 10 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 11 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU group 12 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
IOMMU group 12 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU group 13 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 0 [1022:1440]
IOMMU group 13 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 1 [1022:1441]
IOMMU group 13 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 2 [1022:1442]
IOMMU group 13 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 3 [1022:1443]
IOMMU group 13 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 4 [1022:1444]
IOMMU group 13 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 5 [1022:1445]
IOMMU group 13 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 6 [1022:1446]
IOMMU group 13 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse/Vermeer Data Fabric: Device 18h; Function 7 [1022:1447]
IOMMU group 14 01:00.0 SATA controller [0106]: JMicron Technology Corp. JMB58x AHCI SATA controller [197b:0585]
IOMMU group 15 02:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset USB 3.1 XHCI Controller [1022:43ee]
IOMMU group 15 02:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset SATA Controller [1022:43eb]
IOMMU group 15 02:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 500 Series Chipset Switch Upstream Port [1022:43e9]
IOMMU group 15 03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
IOMMU group 15 03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
IOMMU group 15 03:08.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
IOMMU group 15 03:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43ea]
IOMMU group 15 05:00.0 Non-Volatile memory controller [0108]: Intel Corporation NVMe Optane Memory Series [8086:2522]
IOMMU group 15 06:00.0 Network controller [0280]: Intel Corporation Wi-Fi 6 AX200 [8086:2723] (rev 1a)
IOMMU group 15 07:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller I225-V [8086:15f3] (rev 03)
IOMMU group 16 08:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU117GL [T1000 8GB] [10de:1ff0] (rev a1)
IOMMU group 16 08:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:10fa] (rev a1)
IOMMU group 17 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
IOMMU group 18 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
IOMMU group 19 0a:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
IOMMU group 1 00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 20 0a:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU group 21 0a:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
IOMMU group 2 00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 3 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 4 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 5 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 6 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 7 00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 8 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 9 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]

While researching this, I've read a lot about B550 motherboards being hit or miss when it comes to IOMMU support, and that it can work or stop working depending on your BIOS version. I've also seen similar issues come up because of kernel bugs.

I've tried a few different BIOS revisions as well as both Proxmox 8.1-1 and 8.0-2.

I very well could be missing something or it could be as simple as my motherboard not having proper support for IOMMU. Any help or guidance is appreciated!

My system specs are:
AMD Ryzen 5 3600X
ASUS ROG STRIX B550-I GAMING (Currently on latest BIOS version)
Jonsbo N1 Case

What steps have you taken so far? Give us the step-by-step changes you made to try to make it work.

e.g. changed this BIOS setting
changed this config file here to this

https://pve.proxmox.com/wiki/PCI_Passthrough
I'm assuming this is the guide you were using?

Yes, that was the guide I was following.

In Proxmox, I've tried editing the GRUB line to GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on", adding iommu=pt and pcie_acs_override=downstream, and adding all of those parameters to the systemd-boot kernel command line to cover all bases (rough sketch below). After every change, I made sure to run either update-grub or proxmox-boot-tool refresh and then reboot. I always initiated the reboot from the shell.
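For reference, this is roughly what those edits look like; the root= line in /etc/kernel/cmdline is just the typical ZFS-install default and will differ depending on the setup, so treat this as a sketch rather than a copy of my exact files:

# /etc/default/grub (GRUB installs)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream"

# /etc/kernel/cmdline (systemd-boot installs, single line)
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet amd_iommu=on iommu=pt pcie_acs_override=downstream

# apply the change and reboot
update-grub                  # for GRUB
proxmox-boot-tool refresh    # for systemd-boot
reboot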

For the BIOS, I've tried running the latest two versions of the firmware (3402 & 3301) and the first two (0205 & 0402). There's no real rhyme or reason to it; I just haven't decided to go through every version yet. My gut says that won't fix this issue, but if it comes to it I will try them all to rule it out.
On each BIOS version, I've made sure that SVM, IOMMU, and ACS are all set to Enabled. CSM has always been disabled, and Proxmox has always been installed in UEFI mode.

A couple of things I forgot to mention: when I run dmesg | grep 'remapping', this is the output I always get:

[    0.438446] x2apic: IRQ remapping doesn't support X2APIC mode
[    0.662012] AMD-Vi: Interrupt remapping enabled

I also have an Nvidia T1000 GPU installed that I planned on passing through at a later date, but I'm not sure whether having the board's one PCIe slot populated has something to do with it?


Another good guide you can reference, which I have used before:
https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/

Step 2 (VFIO Modules) seems to be the part you're missing; a rough sketch of that step is below.
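The gist of that step, assuming a stock Proxmox install, is adding the VFIO modules to /etc/modules and rebuilding the initramfs (on kernels 6.2 and newer, vfio_virqfd is built into vfio and can be left out):

# /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd    # only needed on kernels older than 6.2

# rebuild the initramfs and reboot
update-initramfs -u -k all
reboot

# afterwards, confirm the modules loaded
lsmod | grep vfio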

There were some good nuggets in a video I found when it came to passing through HDDs; perhaps it can help you. If you can see the drives in Proxmox, then making the configuration changes to the VM's config file might help you pass them through to the VM. One word of advice from the comments of that video: configure the passthrough with the drives' UUID (or another stable identifier) instead of the drive letter. That seems like sound advice.
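If it helps, these are the usual ways to find stable identifiers instead of /dev/sdX (the exact names will of course depend on your drives):

# per-drive IDs built from model + serial number; these don't change between reboots
ls -l /dev/disk/by-id/

# partition/filesystem UUIDs, if you prefer those
blkid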


If you want to skip VFIO and just assign the physical disks to the TrueNAS VM (that's what I do), you can use this guide:

https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)

If you are backing up your VMs in Proxmox, then after this is set up be sure to uncheck the "Backup" box under the advanced options for each drive, to prevent Proxmox from backing up all 5 drives' worth of data every time.
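As a rough sketch of what that boils down to on the CLI (the VM ID 100, the scsi1 slot, and the disk ID below are placeholders; use your own values from /dev/disk/by-id/):

# attach a whole physical disk to the VM by its stable by-id path, with backup disabled
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL,backup=0

# confirm it shows up in the VM config
qm config 100 | grep scsi1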

I ended up with:


[screenshot: Screenshot_20231127_052558]


I'm in business! Following Whizdumb's and axavio's suggestions was all I needed! And thanks, axavio, for the tip about disabling "Backup" in Proxmox, as I wouldn't have thought to check that.

I appreciate it!
