
Help with Proxmox and FreeNAS build

I did spend some time reading other posts before starting this, but I couldn’t see what I needed to know. So I am throwing my questions into the fray as well.

My plan was to build a Proxmox-based system to run 2-3 LXC containers, maybe 2-3 Linux server VMs, and at least one Windows 10 VM with GPU passthrough.

The idea was to have the OS on two NVMe (RAID 1), the VMs on two SSDs (RAID 1), and then have an HBA card (and drives) that would be passed through to FreeNAS for a storage pool.

I keep reading about motherboards that disable PCIe slots or SATA ports, or drop them to lower speeds, when NVMe drives are installed.

What motherboard would be good for two NVMe drives, two SATA SSDs, a Radeon RX 5600 XT, an LSI 9211-8i, and maybe even a NIC later on?

I’m not locked into AMD or Intel. I just want to approach this in a less-dumb-than-usual manner.

The HBA isn’t mandatory, just thought it might be the way to go.

Please let me know your suggestions! Thank you.

You are going to want an HEDT motherboard and CPU: Threadripper, Epyc, higher-end i9s, etc.

Those boards/CPUs have more PCIe lanes than the normal desktop CPUs do. That means that they have enough lanes for multiple different high bandwidth PCIe devices.

If you are using FreeNAS (and therefore ZFS) in a VM, an HBA is almost mandatory. ZFS really should have direct access to the disks in the pool whenever possible, instead of being insulated from the hardware by RAID controllers, JBOD layers, disk images, etc.

An HBA, or even a RAID card crossflashed to IT mode, is highly recommended for any array worth adding redundancy to.
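If you do go the passthrough route, the Proxmox side looks roughly like this. This is a minimal sketch: the PCI address (01:00.0) and VM ID (100) are placeholders, so check your own with `lspci` and adjust accordingly.

```shell
# Enable IOMMU in the kernel command line first
# (intel_iommu=on for Intel, amd_iommu=on for AMD), then reboot.

# Find the HBA's PCI address (assumed 01:00.0 below):
lspci -nn | grep -i LSI

# Confirm the HBA sits in its own IOMMU group, so it can be
# passed through cleanly without dragging other devices along:
find /sys/kernel/iommu_groups/ -type l

# Hand the whole HBA to the FreeNAS VM (VM ID 100 here):
qm set 100 -hostpci0 01:00.0
```

With the HBA passed through, FreeNAS sees the raw disks directly, which is exactly what ZFS wants.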

Also, if you’re going with Proxmox, you could let it manage your ZFS array too. I used to run FreeNAS in a VM with an HBA passed through under ESXi, but the complication and inflexibility when it came to server maintenance made it painful to manage. I was also never able to get reliably full-speed transfers over even gigabit, while the same hardware had zero problems on a bare-metal install.
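Letting Proxmox own the pool is simple since ZFS is built in. A minimal sketch, assuming four disks on the HBA; the pool name, storage ID, and device names are all placeholders, and in practice you'd use stable `/dev/disk/by-id/` paths rather than `/dev/sdX`:

```shell
# Create a RAIDZ1 pool named "tank" from four disks:
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Register it with Proxmox as VM/container storage:
pvesm add zfspool tank-vmstore --pool tank
```

You lose the FreeNAS web UI for shares, but you also lose a whole layer of virtualization between ZFS and the disks.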