Is VT-d necessary for an HBA passthrough to a VM?

I have an i5 2500K in my current ESXi server, and I would like to virtualize FreeNAS with an HBA passthrough. I'm new at this and still muddling through and was hoping to get some advice before I pull the trigger on an HBA card. Sorry if I posted this in the wrong place, pretty new to the forums and haven't quite learned my way around.

Yes, my understanding is that to pass a physical device through to a VM you need VT-d support, regardless of the hypervisor.

Some hypervisors support virtual Fibre Channel, provided the HBA card and its drivers also support it.


The platform must have an IOMMU for DMA remapping any PCI Function which is to be assigned for VMDirectPath I/O. The IOMMU’s DMA re-mapping functionality is necessary in order for VMDirectPath I/O to work. DMA transactions sent by the passthrough PCI Function carry guest OS physical addresses which must be translated into host physical addresses by the IOMMU.

So, yes, you need VT-d.
Why do you need passthrough anyway? Is your system CPU-bound?
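If you want to confirm what your hardware actually supports before buying anything, here's a rough sketch you could run from a Linux live USB (it won't work in the ESXi shell; there, the passthrough device list in the vSphere client is the real test). Note the `vmx` CPU flag only indicates VT-x, not VT-d; directed I/O isn't reported in `/proc/cpuinfo`, so the `/sys/class/iommu` check is the one that matters:

```shell
# Does the CPU advertise virtualization extensions at all?
# (VT-x/AMD-V -- required for VMs, but NOT the same thing as VT-d.)
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU virtualization extensions (VT-x/AMD-V): present"
else
    echo "CPU virtualization extensions (VT-x/AMD-V): absent"
fi

# Is an IOMMU actually active? This is the VT-d/AMD-Vi check.
# Empty (or missing) means VT-d is unsupported, or disabled in firmware.
if [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
    echo "Active IOMMU found -- VT-d/AMD-Vi is enabled"
else
    echo "No active IOMMU -- VT-d is unsupported or disabled in firmware"
fi
```

On a 2500K you'd expect the first check to pass and the second to fail, which matches Intel's spec sheet for that part.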

Well, the reason I wanted to use passthrough is that FreeNAS really needs direct access to the disks. If I install FreeNAS on a USB drive and then spin up a VM with the disks attached through a passed-through HBA, I can always recover the configuration from that original USB drive: if I do something stupid with ESXi, I can still get at my data by just booting off the original FreeNAS install. The problem is that my 2500K doesn't support VT-d, so apparently I can't even pass the disks to the VM. I bought this CPU cheap before I really knew what I was doing. It's still great for general server stuff; I just like running ESXi and didn't want to dismantle all my VMs and rebuild them as Docker containers inside FreeNAS, since I've never done that before.

Possible to run ESXi or FreeNAS as a privileged container inside one another? (This sounds terrible, dunno why I thought of it.) Wouldn't need VT-d that way.

Never tried it, so talking out of my ass here. If that's some production environment you're talking about then just ignore me.
Why not just use pRDM (maybe even over NPIV, if your HBA allows it)?

Alright, I'm still pretty new to this and you lost me at the first acronym.

RDM is Raw Device Mapping.
The docs are for ESX 4, but this feature hasn't changed much anyway. And I don't think it requires VT-d.
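Roughly, creating a physical-mode (pass-through) RDM from the ESXi shell looks like this. The device ID and datastore path below are placeholders; substitute your own:

```shell
# List the physical disks ESXi sees; note the naa.* identifier of
# the disk you want to hand to FreeNAS.
ls /vmfs/devices/disks/

# Create a physical compatibility mode RDM mapping file on a datastore
# (-z = physical mode; -r would create virtual mode instead).
# "naa.XXXXXXXX" and the datastore/VM paths are placeholders.
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXX \
    /vmfs/volumes/datastore1/freenas/disk1-rdm.vmdk
```

You then attach the resulting `.vmdk` to the VM as an existing disk. In physical mode, SCSI commands go more or less straight to the device, which is why people use it for ZFS when real passthrough isn't available.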

NPIV is N_Port ID Virtualization. It's an HBA feature that allows creating virtual FC WWPNs at the adapter level and passing them into a virtual machine. It also doesn't require VT-d, buuuut it does require a switched FC fabric, so if you have a DAS, it may not be an option.

Well, first off, thanks for posting links. Reading those cleared up a good bit of the fog around this topic for me. At this point I'm just going to purchase a RAID card instead of being dead set on ZFS, since ESXi doesn't support software RAID. Later down the line I'll know more about what to look for when choosing hardware, and I can do this the right way from the start. Thanks again!