Proxmox Host with Unraid VM - HBA passthrough - Shared IOMMU groups

Hello all,
Hoping someone would be kind enough to assist me here.

Background:
I have a Proxmox node running on a Ryzen 2500 with an Aorus B450 Pro WiFi motherboard (F60 BIOS).
I intend to run Unraid in a VM, passing through an LSI HBA to connect my storage array.
(I know, I know… many people think that is silly. I specifically want Unraid's storage capabilities; I will not use its hypervisor.)

Problem:
My HBA is in the same IOMMU group as every other PCIe device slotted into the motherboard.

Efforts:
Here is what I have done so far.
I followed the documentation "PCI Passthrough - Proxmox VE" to enable PCIe passthrough.

  1. Enabled ACS in the BIOS.
  2. Enabled IOMMU in the BIOS.
  3. Edited "/etc/default/grub" to append "amd_iommu=on iommu=pt" and "/etc/kernel/cmdline" to read "quiet amd_iommu=on iommu=pt".
     (I know only the systemd-boot file should have been necessary, but when it didn't work, I went ahead and edited GRUB too.)
  4. Edited /etc/modules to include vfio, vfio_iommu_type1, vfio_pci, and vfio_virqfd.
  5. Added "pcie_acs_override=downstream" to both the GRUB and systemd-boot command lines.
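
For anyone retracing these steps, it's worth double-checking that the edited parameters actually reached the running kernel before digging further; the live command line can be printed with:

root@prox:~# cat /proc/cmdline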
Below are the relevant outputs. If anyone is able and willing to advise, I would be grateful; let me know if you'd like any more information.
I am about three months into Linux and still a CLI baby, so go easy on me, ok? I'm learning!

root@prox:~# dmesg | grep -e DMAR -e IOMMU
[ 0.513735] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.514514] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.514843] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

root@prox:~# dmesg | grep 'remapping'
[ 0.514519] AMD-Vi: Interrupt remapping enabled

My groups are as follows, with "0000:07:00.0" being the LSI HBA. It is in group 11 along with every other PCIe device.

root@prox:~# find /sys/kernel/iommu_groups/ -type l
/sys/kernel/iommu_groups/17/devices/0000:0a:00.3
/sys/kernel/iommu_groups/7/devices/0000:00:08.0
/sys/kernel/iommu_groups/15/devices/0000:0a:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:07.0
/sys/kernel/iommu_groups/13/devices/0000:09:00.2
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/11/devices/0000:03:00.0
/sys/kernel/iommu_groups/11/devices/0000:02:07.0
/sys/kernel/iommu_groups/11/devices/0000:05:00.1
/sys/kernel/iommu_groups/11/devices/0000:02:00.0
/sys/kernel/iommu_groups/11/devices/0000:08:00.0
/sys/kernel/iommu_groups/11/devices/0000:01:00.2
/sys/kernel/iommu_groups/11/devices/0000:01:00.0
/sys/kernel/iommu_groups/11/devices/0000:02:06.0
/sys/kernel/iommu_groups/11/devices/0000:07:00.0
/sys/kernel/iommu_groups/11/devices/0000:06:00.0
/sys/kernel/iommu_groups/11/devices/0000:05:00.0
/sys/kernel/iommu_groups/11/devices/0000:02:08.0
/sys/kernel/iommu_groups/11/devices/0000:01:00.1
/sys/kernel/iommu_groups/11/devices/0000:04:00.0
/sys/kernel/iommu_groups/11/devices/0000:02:01.0
/sys/kernel/iommu_groups/11/devices/0000:02:04.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.3
/sys/kernel/iommu_groups/8/devices/0000:00:08.1
/sys/kernel/iommu_groups/16/devices/0000:0a:00.2
/sys/kernel/iommu_groups/6/devices/0000:00:07.1
/sys/kernel/iommu_groups/14/devices/0000:09:00.3
/sys/kernel/iommu_groups/4/devices/0000:00:04.0
/sys/kernel/iommu_groups/12/devices/0000:09:00.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/10/devices/0000:00:18.3
/sys/kernel/iommu_groups/10/devices/0000:00:18.1
/sys/kernel/iommu_groups/10/devices/0000:00:18.6
/sys/kernel/iommu_groups/10/devices/0000:00:18.4
/sys/kernel/iommu_groups/10/devices/0000:00:18.2
/sys/kernel/iommu_groups/10/devices/0000:00:18.0
/sys/kernel/iommu_groups/10/devices/0000:00:18.7
/sys/kernel/iommu_groups/10/devices/0000:00:18.5
/sys/kernel/iommu_groups/0/devices/0000:00:01.0
/sys/kernel/iommu_groups/9/devices/0000:00:14.3
/sys/kernel/iommu_groups/9/devices/0000:00:14.0
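
For readability, the same listing can be printed with the device names attached. A small sketch, assuming lspci (pciutils) is installed:

# Print each IOMMU group alongside the lspci description of its devices
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'group %s: ' "$g"
    lspci -nns "${d##*/}"
done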

/etc/default/grub:

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
GRUB_CMDLINE_LINUX=""
pcie_acs_override=downstream,multifunction

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console


/etc/kernel/cmdline:

quiet amd_iommu=on iommu=pt
pcie_acs_override=downstream,multifunction

/etc/modules:

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
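
(Side note for future readers: on top of loading these modules, the HBA can be pinned to the vfio-pci driver by vendor:device ID so no host driver claims it first. A rough sketch; the 1000:0072 ID below is just an example LSI ID, so substitute whatever lspci reports for your own card:)

# Look up the HBA's vendor:device ID (my HBA sits at 07:00.0)
lspci -nn -s 07:00.0

# Pin it to vfio-pci at boot, then rebuild the initramfs
echo "options vfio-pci ids=1000:0072" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all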

TIA
-Paul

Well, I was unable to pass the HBA through to the VM. I knew I had the option to pass the HDDs through individually by serial number, but I wanted to avoid that: I figured the straightest path between Unraid and the drives would give the best speeds, and I didn't want the hypervisor mediating them, to reduce any resource overhead. I guess we'll see how it performs after parity is built.
For anyone in the future who wanders across this thread, here is how I passed the HDDs through (a sketch follows the link).

https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)
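
In short, it comes down to attaching each drive by its stable /dev/disk/by-id path with qm set. A minimal sketch; the VM ID (100), the slot (scsi1), and the serial in the path are placeholders, so substitute your own:

# Find the stable by-id path for each drive
ls -l /dev/disk/by-id/

# Attach one drive to the Unraid VM
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VA1234XY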

Best of luck!
-Paul


I just created my account to let you know that your grub file is wrong.
Not sure if you've already figured it out after all this time, but the syntax is as follows:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on pcie_acs_override=downstream,multifunction"
GRUB_CMDLINE_LINUX=""

As you can see, pcie_acs_override and the rest go inside the GRUB_CMDLINE_LINUX_DEFAULT value, not on a separate line in the config.
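
The same applies to /etc/kernel/cmdline on systemd-boot installs: that file must be a single line, so the override belongs on the same line as the other parameters. After fixing either file, the change still has to be propagated to the bootloader before rebooting; roughly:

update-grub                 # regenerates /boot/grub/grub.cfg on GRUB installs
proxmox-boot-tool refresh   # syncs /etc/kernel/cmdline on systemd-boot installs
reboot

Once rebooted, re-running the find command above should show the HBA in its own IOMMU group if the override took effect.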