How do I enable IOMMU on a Supermicro X10DRi-T4+ motherboard?

I have a Supermicro X10DRi-T4+ motherboard with 2x Intel Xeon E5-2690 v4 processors.

It already has the latest BIOS. Intel VT-x AND Intel VT-d are enabled in the BIOS, but when I probe the system from Proxmox, it doesn't show the IOMMU as actually enabled and/or operational.
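(For context, the kind of probing I mean is roughly the following generic checks; the exact messages vary by kernel:

dmesg | grep -e DMAR -e IOMMU     # should report the IOMMU/DMAR as enabled
ls /sys/kernel/iommu_groups/      # should list numbered groups once it's working

and that's what isn't showing up for me.)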

Any help would be greatly appreciated.

Thank you.

2 Likes

@alpha754293 did you ever figure it out?

I have it working now.

Contrary to:

For Intel CPUs, you may also need to enable the IOMMU on the kernel command line for older (pre-5.15) kernels

I did need to set intel_iommu=on, despite PVE 8 running kernel 6.2.
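A quick sanity check after the reboot (generic, not PVE-specific) is to confirm the option actually landed on the running kernel:

cat /proc/cmdline               # should now include intel_iommu=on
dmesg | grep -e DMAR -e IOMMU   # should report the IOMMU as enabled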

1 Like

There's a very real possibility that, when I posted this question, I was using ZFS root on Proxmox VE 7.3-3.

(I've since upgraded to 7.4-3, but I don't think the Proxmox version is the issue; the ZFS root is.)

If you follow the instructions for GPU passthrough, it will tell you to add the kernel command line arguments in /etc/default/grub and then run update-grub.
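For reference, that non-ZFS path is roughly this (a sketch assuming a stock GRUB setup; the exact extra options depend on your hardware):

nano /etc/default/grub
# set, e.g.: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub
reboot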

If you DON’T have ZFS root, then this works.

If you DO have ZFS root, then this doesn’t work.

I never did figure out how to get this to work with ZFS root in Proxmox.

I ended up just reformatting the Proxmox OS drive from ZFS raidz2 to HW RAID6 (via a MegaRAID 12 Gbps SAS 9361-8i), installed Proxmox "normally", and everything else worked.

(My test systems were both only using single drives for the OS, so I didn't discover this issue with ZFS root until I was deploying my main Proxmox server, where I threw in 4x HGST 1 TB SATA 3 Gbps HDDs as the OS drive.)

1 Like

For ZFS root in Proxmox, you need to edit /etc/kernel/cmdline and add intel_iommu=on.
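Roughly something like this (a sketch; the root= portion is whatever your install already has, shown here for a default rpool layout):

nano /etc/kernel/cmdline
# append to the single existing line, e.g.:
# root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
proxmox-boot-tool refresh   # regenerates the boot entries so the change takes effect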

I don't remember where I found that (so I can't give credit), but I put it in my notes on how to set up Proxmox.

1 Like

This is what I found independently to work as well.

1 Like

This is the GRUB command line that I have to edit/put in, to make it work:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream nofb nomodeset initcall_blacklist=sysfb_init video=vesafb:off,efifb:off vfio-pci.ids=10de:2531,10de:228e disable_vga=1"

Can I just "dump" the exact same thing into /etc/kernel/cmdline and have it work the same way?

Or is the syntax different with ZFS root?
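My understanding so far (untested, so please correct me) is that /etc/kernel/cmdline is just a single line of options with no GRUB_CMDLINE_LINUX_DEFAULT="..." wrapper and no quotes, appended after the existing root= entry, roughly:

root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt pcie_acs_override=downstream nofb nomodeset initcall_blacklist=sysfb_init video=vesafb:off,efifb:off vfio-pci.ids=10de:2531,10de:228e disable_vga=1

followed by proxmox-boot-tool refresh and a reboot.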

Your help is greatly appreciated.

Thank you.

2 Likes

I only have:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

I have all of these enabled.

Thanks.

I have no idea what the rest of those BIOS options do; that's why I don't "fix" what isn't "broken".

As I mentioned, I ended up ditching the ZFS root and went with a 9361-8i managed HW RAID6 array for the OS/boot drive, and the instructions for GPU passthrough in Proxmox VE 7.3-3 worked without any hitches, so I stuck with what worked instead.

Thanks.

1 Like

Since the whole section is labelled VT-d, I figured I'd try turning them all on.

1 Like

Yeah - it turns out that there is supposedly/apparently a different way to update GRUB under ZFS root than there is when you aren't running ZFS root.

I don’t remember the details of it, nor did I bother testing it, so I can’t cite the specifics anymore.

The other aspect of this is that ZFS root just added another layer of complexity and complication to the whole thing.

Now, if you don't have HW RAID but you still want the protection that "RAID" offers (via a ZFS mirror, for example), then you might be a little bit SOL on that front, since ZFS root may be your primary and/or only option for that kind of OS/boot drive redundancy/fault protection against a failure of said OS/boot drive.

(YouTuber ByteMy Bits, from the sounds of it, recently experienced this when his Proxmox OS boot drive died due to the absence of such fault protection.)

For me, using a HW RAID HBA was easier (and since I needed it for the backplane anyway, in order to control/manage and address 36 drives, it worked for me). It also helps that it came with the system when I bought it, so I didn't have to do anything extra either.

In any case, I think there's a way to pass the IOMMU options to GRUB on ZFS root, but it's not as straightforward as the instructions on the Proxmox forums would have it, so it takes additional time and research to merge those (non-ZFS-root) instructions with the instructions for ZFS root.

Make your life easier and get a HW RAID HBA and just run the IOMMU the “normal” way.

(I miss the X79 days where there were 44 PCIe lanes that you can play with.)

Idk, for IOMMU with ZFS root on Proxmox 8, I followed the documentation, and the only thing that was misleading was whether or not to add intel_iommu=on (the pre-5.15 note quoted earlier in this thread).

Other than that it was just about the bios configuration.

I've always installed Proxmox on ZFS root, but if the installer doesn't give you the option for mdraid or an LVM mirror, you could always go through a normal Debian installation, set those up there, and then set up Proxmox on top of it.

Yeah, I haven't migrated to Proxmox 8 yet. Currently, there are no plans to do so (in keeping with the "don't break it" mantra that I have with pretty much ALL things Linux).

I started this thread at the beginning of this year, when Proxmox was still only version 7.3-3.

As I mentioned, I tried it at the beginning of this thread and GPU passthrough didn't work, so… c'est la vie?

I don't remember if the installer gave more options than ZFS root or just a single drive. Maybe in the graphical installer, but I also don't remember if there was a "More options…" button either.

If you manage to test and get GPU passthrough working with ZFS root, that can be useful information, I think, for the community.

But for Proxmox 7.3-3, it didn't work for me; or, more specifically, there were (potentially) additional things that you needed to do to get it working with said ZFS root, which I didn't even stumble across until much later.

But by then, the system was already deployed into production (needed it up and running quickly when I was going through my mass migration project), so I didn’t have a lot of “down time” to try and get it up and running.

It is working and is documented here in this thread.

That's not EXACTLY or entirely true.

As for adding the rest of the kernel cmdline options (either to GRUB and/or to /etc/kernel/cmdline): it's not exactly clear whether I would need to, for example, supply the vfio-pci.ids (in /etc/kernel/cmdline) for it to work.
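(As an aside, my understanding, though I haven't verified it on ZFS root, is that the vfio-pci IDs don't strictly have to live on the kernel command line at all; they can instead go into a modprobe config, e.g.:

echo "options vfio-pci ids=10de:2531,10de:228e disable_vga=1" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all

but I haven't tested that route.)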

And now that my server is in PROD, I can't use it as a test system, and I don't have another system with multiple hard drives to test with anymore.

(cf. [SOLVED] - GPU Passthrough with RTX 3090 Doesn't work & DMAR Errors | Proxmox Support Forum)

I didn’t.

Granted, I have still not put it into use. I added an AMD GPU to a Debian VM and confirmed that the VM saw it.

Additionally, I also plan to move away from ZFS on root so that I can bootstrap the Proxmox hosts with PXE, so as you’ve pointed out, the requirements may change.

1 Like

It is my assumption then, given this statement, that it is either more difficult or outright impossible to bootstrap your hosts over PXE with ZFS on root?

Ultimately, I think that ZFS on root is good if you don't have some kind of hardware RAID option (be it onboard or via an HBA). But if you do have that option, it is actually a lot easier to just set up the array on the HW RAID HBA and then install Proxmox "normally" than to have to deal with ZFS breaking on your Proxmox boot drive, which would then take down the entire system. (Which, in my case, because of my mass consolidation effort earlier this year, would be absolutely terrible/horrible.)

It's not possible to bootstrap Proxmox with PXE at all. You have to install Debian via PXE and then convert it to Proxmox. Debian has no ZFS-on-root installation option; maybe it's possible to set it up, but since it's not officially supported, vaya con dios. I have not done this yet, but an mdraid/LVM root should be possible to configure in Debian via PXE and preseed, so that's the route I'll be taking.
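As I understand it (I haven't actually run this yet), the conversion step is the officially documented route and looks roughly like the below on Bookworm/PVE 8; check the "Install Proxmox VE on Debian 12 Bookworm" wiki page for the exact repository line, key, and package list:

echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi

That part should be straightforward to script once the preseeded Debian install is in place.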

I agree with this only in the case of UEFI booting, where replicating the FAT32 partition gets complicated for software RAID setups. For legacy BIOS booting, setting up software RAID through a guided installer is relatively straightforward.

My issue with software RAID is that if the software dies, then trying to recover from that takes more research and effort, versus the documentation that, for example, LSI already offers on how to recover from RAID/logical drive group failures on their controllers (probably because it's happened to them a decade or two ago, so they already have the recovery procedure documented).
