I have 6x 6.4TB NVMe (U.3/U.2) enterprise disks that I would like to use in a TrueNAS Scale server as my main NAS. I am contemplating the motherboard and CPU choice. My main goal is low power consumption with good read/write performance. There are a few things in my consideration set:
Use a lower-power CPU
Use the ASUS Hyper M.2 x16 card to connect 4x U.2 NVMe drives through M.2-to-U.2 adapter cables (I have tried the cable on an on-board M.2 connector but never on the ASUS Hyper M.2 x16 card, though I don't see any problem doing that)
Because of #2, I need at least one PCIe x16 slot that supports x4x4x4x4 bifurcation and one PCIe x8 slot that supports x4x4 (ideally more PCIe x16 slots for expansion)
I also want to use a Mellanox 40Gb Ethernet card for high-speed LAN access since this is my main NAS, so that needs another PCIe x8 slot for a full-speed connection
I am also thinking of adding a few small Optane drives for special VDEVs (metadata and SLOG), so more PCIe slots would be better
Needs IPMI
I prefer Supermicro motherboards. I have a few X10 server boards, but I am not sure whether a newer generation (X11/X12/X13) would give me better power efficiency
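For the special VDEV/SLOG idea above, this is roughly the ZFS layout I have in mind. A hedged sketch only: pool name and device paths are placeholders, and on a real system you would use stable /dev/disk/by-id/ names instead.

```shell
# Placeholder pool name "tank" and device nodes; substitute your own.

# Mirror the special vdev: losing it loses the entire pool.
zpool add tank special mirror /dev/nvme6n1 /dev/nvme7n1

# SLOG only accelerates sync writes (NFS, iSCSI, databases); losing it
# only risks the last few seconds of in-flight sync writes.
zpool add tank log /dev/nvme8n1

# Optionally route small records to the special vdev too (per dataset):
zfs set special_small_blocks=16K tank/vms
```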
So it sounds like I need a motherboard that has a few PCIe x16/x8 slots and offers decent power efficiency. Is 50-100W idle power consumption realistic in my case?
I already have a low-power CPU (Xeon E5-2630L v4) and a dual-socket board (Supermicro X10DRi-T), but I don't think dual CPUs will help me reach the goal of low idle power consumption.
Well, EPYC isn’t exactly low-power, but maybe the Supermicro H12SSL-i board with a 16-core CPU could fit the bill? It has loads of PCIe, and I believe all of the x16 slots can be bifurcated. Another option is the Tyan S8030: it only has 5 x16 slots (all of which can be bifurcated), but it also has 2 built-in SFF-8654 connectors that each support 2x U.2. Otherwise, maybe one of the ASRock Rack AM4/AM5 boards?
Do you really need dedicated x4 lanes for each SSD? An NVMe HBA with a PLX switch, such as the Ceacent CNS44PE16, might be sufficient, since the aggregate uplink for all disks behind it is x16 PCIe 3.0.
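Back-of-the-envelope check on that uplink claim. These are my rough numbers, accounting only for PCIe 3.0/4.0 128b/130b encoding and ignoring protocol overhead, not datasheet figures:

```python
def pcie_gb_per_s(gen, lanes):
    """Approximate usable PCIe bandwidth in GB/s.

    Only accounts for 128b/130b line encoding (Gen3/Gen4);
    real-world throughput is a bit lower due to packet overhead.
    """
    gt_per_lane = {3: 8.0, 4: 16.0}[gen]   # GT/s per lane
    return gt_per_lane * (128 / 130) * lanes / 8  # bits -> bytes

uplink = pcie_gb_per_s(3, 16)   # switch uplink, ~15.8 GB/s
per_drive = pcie_gb_per_s(3, 4) # each U.2 drive link, ~3.9 GB/s
print(f"uplink {uplink:.1f} GB/s, per drive {per_drive:.1f} GB/s")
```

With 4 drives behind the switch, downstream is 4 × x4 = x16, so the uplink is only oversubscribed if you hang more than 4 Gen3 drives off it.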
For the 40G NIC, you might be able to get away with x4 PCIe 4.0 if you use a ConnectX-5 (MCX516A-BDAT).
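Same kind of sanity check for the NIC suggestion (rough numbers again, encoding overhead only, and only considering a single 40G port):

```python
def pcie_gb_per_s(gen, lanes):
    # Usable GB/s after 128b/130b encoding; protocol overhead ignored.
    gt_per_lane = {3: 8.0, 4: 16.0}[gen]
    return gt_per_lane * (128 / 130) * lanes / 8

nic_line_rate = 40 / 8             # 40GbE ~ 5.0 GB/s
slot = pcie_gb_per_s(4, 4)         # x4 PCIe 4.0 ~ 7.9 GB/s
print(f"NIC needs {nic_line_rate:.1f} GB/s, slot gives {slot:.1f} GB/s")
```

So an x4 Gen4 slot has headroom for one 40G port; an x4 Gen3 slot (~3.9 GB/s) would not.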