U.3/U.2 NVMe NAS motherboard/CPU recommendation

I have 6x 6.4TB NVMe (U.3/U.2) enterprise disks that I would like to use in a TrueNAS Scale server as my main NAS. I am contemplating the motherboard and CPU choice. My main goal is low power consumption with good read/write performance. There are a few things in my consideration set:

  1. Use a low-power CPU.
  2. Use the ASUS Hyper M.2 x16 card to connect 4x U.2 NVMe drives through M.2-to-U.2 adapter cables (I have tried the cable on an on-board M.2 connector, never on the Hyper M.2 card itself, but I don’t see any problem doing that).
  3. Because of #2, I need at least one PCIe x16 slot that supports x4/x4/x4/x4 bifurcation and one PCIe x8 slot that supports x4/x4 (ideally more PCIe x16 slots for expansion; a quick lane tally is sketched after this list).
  4. I also want to use a Mellanox 40Gb Ethernet card for high-speed LAN access since this is my main NAS, so that needs another PCIe x8 slot for a full-speed connection.
  5. I am also thinking of adding a few small Optane drives for a special vdev (metadata) and SLOG, so more PCIe slots would be better.
  6. It needs IPMI.
  7. I prefer Supermicro motherboards. I have a few X10 server boards, but I am not sure whether a newer generation (X11/X12/X13) would give me better power efficiency.
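
To sanity-check the slot requirements, here is the lane tally mentioned in #3 (a minimal sketch; the Optane count and per-device lane widths are assumptions based on the plan above):

```python
# Quick PCIe lane tally for the planned build. Lane counts are the
# standard values for these device types; adjust for your exact parts.
devices = {
    "U.2 NVMe drive":             (6, 4),  # six drives, x4 each
    "Mellanox 40GbE NIC":         (1, 8),  # x8 for full dual-port speed
    "Optane (special vdev/SLOG)": (2, 4),  # assumed two x4 Optane drives
}

total = 0
for name, (count, lanes) in devices.items():
    print(f"{count}x {name}: {count * lanes} lanes")
    total += count * lanes
print(f"Total: {total} lanes")
# 24 + 8 + 8 = 40 lanes. A bifurcated x16 covers 4 drives, an x4/x4 x8
# covers the other 2, the NIC takes another x8, and the Optanes need
# their own slots or on-board M.2 -- hence "a few x16/x8 slots".
```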

So it sounds like I need a motherboard that has a few PCIe x16/x8 slots and offers decent power efficiency. Is 50-100 W idle power consumption realistic in my case?

I already have a low-power CPU (Xeon E5-2630L v4) and a dual-CPU motherboard (Supermicro X10DRi-T), but I don’t think the dual-socket board will help me reach the goal of low idle power consumption.

Any totally different suggestions are welcome!

Well, EPYC isn’t exactly low-power, but maybe the Supermicro H12SSL-i board with a 16-core CPU could fit the bill? It has loads of PCIe, and I believe all of the x16 slots can be bifurcated. Another option is the Tyan S8030: it only has 5 x16 slots (all of which can be bifurcated), but it also has 2 built-in SFF-8654 connectors that each support 2x U.2. Otherwise, maybe one of the ASRock Rack AM4/AM5 boards?

Do you really need dedicated x4 lanes for each SSD? An NVMe HBA with a PLX switch, like the Ceacent CNS44PE16, might be sufficient, since the card’s x16 PCIe 3.0 uplink caps the aggregate throughput of all the disks anyway.
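
To put numbers on that (a rough sketch; the ~3.2 GB/s per-drive figure is an assumed typical Gen3 enterprise U.2 sequential rate, not a spec for these exact disks):

```python
# Is an x16 PCIe 3.0 uplink behind a PLX switch a real bottleneck?
GEN3_LANE = 0.985        # GB/s usable per PCIe 3.0 lane (8 GT/s, 128b/130b)
uplink = 16 * GEN3_LANE  # ~15.8 GB/s shared by all drives on the card
per_drive = 3.2          # GB/s, assumed sequential read per drive

for n in (4, 6):
    demand = n * per_drive
    verdict = "capped by uplink" if demand > uplink else "fits"
    print(f"{n} drives: {demand:.1f} GB/s vs {uplink:.1f} GB/s uplink -> {verdict}")
# 4 drives: 12.8 vs 15.8 -> fits; 6 drives: 19.2 vs 15.8 -> capped, but
# only when all drives stream sequentially at full speed at once.
```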

For the 40G NIC, you might be able to get away with x4 PCIe 4.0 if you use a ConnectX-5 MCX516A-BDAT.

I did a quick search on eBay and it seems to be PCIe x16?

The MCX516A-BDAT is physically x16 so that both 40G ports can be saturated even with x16 PCIe 3.0.

If your motherboard has a slot that is x4 PCIe 4.0 electrically (physically x16, or an open-ended x4), that’s enough bandwidth for a single 40G port: x4 PCIe 4.0 is roughly 7.9 GB/s usable, comfortably above the 5 GB/s line rate of one 40G port.
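
The arithmetic, in case it helps (per-lane figures are usable bandwidth after encoding overhead, so approximate):

```python
# Approximate usable PCIe bandwidth vs. 40GbE line rate.
per_lane = {"3.0": 0.985, "4.0": 1.969}  # GB/s per lane after encoding
port_40g = 40 / 8                        # 5 GB/s line rate per 40G port

for gen, lanes in (("3.0", 16), ("4.0", 4)):
    bw = per_lane[gen] * lanes
    print(f"PCIe {gen} x{lanes}: {bw:.1f} GB/s (~{bw / port_40g:.1f}x one 40G port)")
# PCIe 3.0 x16: ~15.8 GB/s -> saturates both 40G ports with headroom
# PCIe 4.0 x4:  ~7.9 GB/s  -> one 40G port with headroom
```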

Your setup is solid, but 100W+ idle is likely.

Key Tips:

  • Motherboard: Supermicro X12/X13 (e.g., X12SPL-F) for better efficiency + IPMI.
  • CPU: Newer low-power Xeon or EPYC may idle better than E5-2630L V4.
  • Power: BIOS tuning helps, but the NVMe drives and the 40Gb NIC draw constant power (the NIC alone is ~10-15 W).
  • Storage: Optane for metadata/SLOG if needed; else, mirrored SSD.

For lower power, consider Xeon-D or low-TDP EPYC. Let me know your workload! :rocket:
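
To make the 100W+ estimate concrete, here is a rough idle budget; every figure is an assumed ballpark for this class of hardware, not a measurement:

```python
# Rough idle power budget -- all numbers are assumed ballparks for
# illustration; measure your own parts at the wall.
idle_watts = {
    "motherboard + BMC/IPMI":    20,
    "CPU package at idle":       25,  # varies hugely with C-states/ASPM
    "6x enterprise U.2 at idle": 30,  # ~5 W each is typical for the class
    "40GbE NIC":                 12,
    "Optane drives + HBA":        8,
    "RAM, fans, PSU losses":     15,
}
for part, watts in idle_watts.items():
    print(f"{part:28s} {watts:3d} W")
print(f"{'Estimated idle total':28s} {sum(idle_watts.values()):3d} W")  # ~110 W
```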

As a point of comparison, my setup below idles at 90W

AMD Ryzen 5750GE
ASRock B550 ITX/AX
DDR4-3200 32GB x2
Intel P4610 7.68TB SSD x8 (40W at idle)
Intel Optane P1600X SSD 118GB
Intel Optane M10 16GB SSD (usb enclosure)
Ceacent CNS44PE16 NVMe HBA
Chelsio T6225-CR @ 10G
80mm fan x4

To lower idle power, you’d want to use PCIe components that support ASPM, which also allows the CPU to enter deeper C-states.

More often than not, enterprise U.2 drives, NICs, and HBAs won’t support ASPM, and many motherboards have questionable ASPM implementations in their BIOS.
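
One way to see what you’re actually getting: `lspci -vv` reports each device’s advertised ASPM support (LnkCap) and what is currently enabled (LnkCtl). A minimal sketch that pulls those lines out (run as root so lspci shows link capabilities):

```python
# List each PCIe device's advertised ASPM support (LnkCap) and what is
# actually enabled (LnkCtl). Parses standard `lspci -vv` output.
import re
import subprocess

out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in out.splitlines():
    if line and not line[0].isspace():  # unindented line = new device header
        device = line.split(" (")[0]
    m = re.search(r"LnkCap:.*?(ASPM[^,]*),", line)
    if m:
        print(f"{device}\n  supports: {m.group(1)}")
    m = re.search(r"LnkCtl:\s*(ASPM[^;]*);", line)
    if m:
        print(f"  enabled:  {m.group(1)}")
```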
