PROXMOX: No IOMMU detected, please activate it

Hello, I have a dual-socket Xeon Gold 6152 server with PROXMOX 7.1-7 installed, and the system refuses to recognize that I've enabled the IOMMU for hardware pass-through.

I've enabled VT-d in the BIOS.
I've added intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT.
I've run proxmox-boot-tool refresh.
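Concretely, the GRUB edit and refresh looked like this (the exact line is pasted further down in the thread):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# apply the change; on systems booted via proxmox-boot-tool this
# replaces calling update-grub directly
proxmox-boot-tool refresh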

The server persists in telling me I haven’t enabled it:

[Screenshot, 2021-12-27: Proxmox web UI warning "No IOMMU detected, please activate it"]

What's strange is that I'm having the same issue on the old hardware I'm looking to replace: a dual Xeon E5-2698 v3 server running PROXMOX 7.1-4.

What confuses me even more is that I have an AMD EPYC 7601 server running 7.1-4, and it's passing hardware devices through with no problems at all.

I can't figure out where the disconnect is for the life of me. The only differentiating factor I can see is that AMD EPYC is supposed to have the IOMMU enabled by default, whereas you have to enable it explicitly for Intel Xeon.

So what’s missing? Any help would be appreciated.

Did you add intel_iommu=on to /etc/kernel/cmdline before running that command?

You're talking about systemd-boot. I haven't tested whether that's what the latest version of PROXMOX relies on. Up to this point, every version of PROXMOX I've used, and every Debian distribution sharing the same underlying OS, has relied on GRUB via /etc/default/grub. I ran proxmox-boot-tool refresh after editing /etc/default/grub because PROXMOX has done away with running update-grub directly.

Right now the new hardware is basically a test server, so I could try the systemd-boot method and see if it takes me anywhere. As I understand it, you use one or the other, not both.

I've only used 7.1 and that's what works for me, but I suppose it depends on how it's installed.

I just went along with however PROXMOX wanted to install the OS with a ZFS mirror. I didn’t customize anything else like specifying a bootloader. Figured it’d be GRUB by default.

Does this look correct to you? I’ll give it a try:

root@intel:~# cat /etc/kernel/cmdline 
root=ZFS=rpool/ROOT/pve-1 boot=zfs
intel_iommu=on

I have it as a single line, not sure if that matters, but yeah.
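That is, the same content joined into one line (this matches the working line posted at the end of the thread):

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on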

It did not work, unfortunately. I tried it with both GRUB and systemd-boot configured together, and I tried systemd-boot by itself. PROXMOX is still complaining that the IOMMU isn't detected.

You’ve enabled IOMMU in the BIOS?

To my understanding, yes.

VT-d is different to IOMMU

Intel → VT-d
AMD → IOMMU

This is how they're labeled under their respective platforms. I've not seen an IOMMU option in an Intel board's BIOS. I have seen it on AMD boards, though, and have used it for hardware pass-through.

To add to some of the confusion: on the other Xeon server where IOMMU groups aren't working, they were working prior to upgrading the OS and BIOS. Afterwards it stopped, and the same BIOS settings that worked before now don't.

dmesg | grep -i IOMMU

and paste here.

[    0.973820] DMAR-IR: IOAPIC id 12 under DRHD base  0xc5ffc000 IOMMU 6
[    0.973824] DMAR-IR: IOAPIC id 11 under DRHD base  0xb87fc000 IOMMU 5
[    0.973827] DMAR-IR: IOAPIC id 10 under DRHD base  0xaaffc000 IOMMU 4
[    0.973829] DMAR-IR: IOAPIC id 18 under DRHD base  0xfbffc000 IOMMU 3
[    0.973832] DMAR-IR: IOAPIC id 17 under DRHD base  0xee7fc000 IOMMU 2
[    0.973834] DMAR-IR: IOAPIC id 16 under DRHD base  0xe0ffc000 IOMMU 1
[    0.973837] DMAR-IR: IOAPIC id 15 under DRHD base  0xd37fc000 IOMMU 0
[    0.973840] DMAR-IR: IOAPIC id 8 under DRHD base  0x9d7fc000 IOMMU 7
[    0.973843] DMAR-IR: IOAPIC id 9 under DRHD base  0x9d7fc000 IOMMU 7
[    2.186410] iommu: Default domain type: Translated

Also, try:

find /sys | grep dmar
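On a system where the DMAR units are active, that should turn up sysfs entries for them, along these lines (illustrative output; exact paths can vary by kernel):

/sys/class/iommu/dmar0
/sys/devices/virtual/iommu/dmar0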

EDIT: looks like it's enabled.

You ran update-grub after changing the GRUB variable? If you check the kernel command line when it boots, do you see intel_iommu in the line?

Edit 2: presumably if we're seeing IOMMU devices, it's enabled. Do you need vfio installed for Proxmox to detect the IOMMU, maybe? What does cat /etc/modules show? It should be:

vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

for PCI-E passthrough.
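If you go that route, the usual way to apply it (also mentioned below) is to add those four modules to /etc/modules, one per line, then rebuild the initramfs and reboot:

update-initramfs -u -k all
reboot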

Yep:

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX=""
root@intel:~# update-grub
Generating grub configuration file ...
W: This system is booted via proxmox-boot-tool:
W: Executing 'update-grub' directly does not update the correct configs!
W: Running: 'proxmox-boot-tool refresh'

Copying and configuring kernels on /dev/disk/by-uuid/2666-320F
        Copying kernel and creating boot-entry for 5.13.19-2-pve
Copying and configuring kernels on /dev/disk/by-uuid/2666-711D
        Copying kernel and creating boot-entry for 5.13.19-2-pve
Found linux image: /boot/vmlinuz-5.13.19-2-pve
Found initrd image: /boot/initrd.img-5.13.19-2-pve
Found memtest86+ image: /ROOT/pve-1@/boot/memtest86+.bin
Found memtest86+ multiboot image: /ROOT/pve-1@/boot/memtest86+_multiboot.bin
Adding boot menu entry for EFI firmware configuration
done

I don't know how to check the kernel command line at boot though.
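For future searchers: the command line the running kernel actually booted with can be read at runtime; intel_iommu=on should appear there if the change took.

cat /proc/cmdline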

I haven't set up the vfio modules yet. I planned on just blacklisting the drivers, but we can try it.

Added the modules to /etc/modules. Ran update-initramfs -u -k all. Rebooted. No difference.

If I try to add a hardware device to a VM despite the error banner, it just throws another error in my face saying the IOMMU isn't present.

@Dexter_Kane @COGlory It looks like the problem has been solved and Dexter_Kane was on the right track.

root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on

PROXMOX did not like that intel_iommu=on was on a separate line, but after appending it to the end of the first line everything started working exactly how it should.

I don't know when the transition occurred or what caused it, but on both of my Intel servers what used to do the job (editing /etc/default/grub) no longer does. It's now /etc/kernel/cmdline.

I’ll be doing some pass-through testing to really make sure that it’s working but otherwise things are looking good now.
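For anyone verifying the same fix: after proxmox-boot-tool refresh and a reboot, the Intel DMAR driver should log that it parsed the flag (this is the message on the 5.13 kernel in use here; exact wording can vary between kernels):

dmesg | grep 'DMAR: IOMMU enabled'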


For future internet searches:

In my reading, /etc/default/grub is used when GRUB is used as the bootloader, and /etc/kernel/cmdline is used when systemd-boot is used as the bootloader. In your case, since you were using systemd-boot, using update-grub wouldn’t do anything. It’s required to use proxmox-boot-tool refresh to cover both contingencies.
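If you're not sure which case applies to a given install, proxmox-boot-tool can report it; it lists each ESP and whether it's configured for uefi (systemd-boot) or grub:

proxmox-boot-tool status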


I did discover this explanation on my own several months ago. I found it was at least partially tied to Legacy vs. UEFI, and possibly to CSM being enabled or disabled, i.e. whatever the system decided to use during installation.

By enabling UEFI and disabling both Legacy & CSM during installation, I was able to get Proxmox to use GRUB instead of systemd-boot. But it's not out of the question that these might be hot-swappable if you know the right commands, since the boot partition(s) can be rebuilt on a ZFS disk after replacement; you just need to build the right partitions depending on whether you used Legacy or UEFI.

Knowing this, you can probably swap systemd-boot for GRUB or vice versa without re-installing, though that is something I have never tested. But yes: enabling IOMMU under systemd-boot uses a different configuration file in a different directory, and a different command to push that change on the next boot.
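As a rough sketch of that disk-replacement step, assuming the stock Proxmox layout where partition 2 is the ESP (/dev/sdX is a placeholder for the new disk):

# format the replacement disk's ESP and register it with proxmox-boot-tool
proxmox-boot-tool format /dev/sdX2
proxmox-boot-tool init /dev/sdX2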