Low-power homelab/NAS server

I’m looking to build a new server for running Windows as the base OS and various VMs. Requirements are:

  • ECC RAM
  • At least 6 NVMe slots
  • SAS HBA
  • 2.5Gb ethernet or better
  • Low idle power
  • Free PCIe slot for future expansion, at least x4
  • Micro ATX

I’m fine with used parts.

It’s difficult to find anything suitable at a reasonable price, because I need so many PCIe lanes. It’s even harder because I want a Micro ATX motherboard; I could go to ATX if it’s really the only option, but I’d much rather stick with an mATX case.

Early Threadrippers seem to have very high idle power consumption. I was thinking of maybe getting a Xeon but I’m a bit out of touch with them.

Is there a website where you can search for CPUs by PCIe lane count?

Any suggestions?

Do you need the full PCIe 4.0 x4 speed for every NVMe? If not, have you considered a tri-mode HBA like a 9500-8i or 9600-16i? That would cut down on the PCIe lanes a lot: together with the other SAS HBA you would only need a PCIe 4.0 x16 slot (in an x8/x8 config), which is available on mATX boards. These often have an M.2 slot as well, which you could then use for future expansion.
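To put rough numbers on the lane savings (drive count from the requirements above; the HBA link widths are the usual x8 cards, so treat this as a sketch rather than exact figures):

```python
# Back-of-the-envelope PCIe lane budget: direct-attached NVMe vs. a tri-mode HBA.
# Assumes the 6 NVMe drives from the requirements, x4 per drive if attached directly.

NVME_DRIVES = 6
LANES_PER_NVME = 4
SAS_HBA_LANES = 8       # a typical SAS HBA is a PCIe x8 card
TRIMODE_HBA_LANES = 8   # 9500-8i / 9600-16i are PCIe 4.0 x8 cards

# Option A: every NVMe drive on its own x4 link (bifurcated slots / M.2 sockets).
direct = NVME_DRIVES * LANES_PER_NVME + SAS_HBA_LANES
print(f"Direct attach + SAS HBA: {direct} lanes")    # 32 lanes: more than a single x16

# Option B: NVMe behind a tri-mode HBA, SAS HBA in the other half of an x8/x8 split.
tri_mode = TRIMODE_HBA_LANES + SAS_HBA_LANES
print(f"Tri-mode HBA + SAS HBA:  {tri_mode} lanes")  # 16 lanes: one x16 slot in x8/x8
```

The trade-off is that all six drives then share the tri-mode card’s x8 uplink, which only matters if you genuinely need full x4 per drive.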

3 Likes

Interesting, I had not considered that. I didn’t even know such things existed. That would certainly solve a lot of problems for me. I don’t need full NVMe performance, so no issue there.

The only issue I can see is that the cases I am looking at all have 3.5" bays, not 5.25". So there aren’t many options for SlimSAS-to-NVMe adapters that will fit.

Here we go again. Also:

Eeeewwwwww :face_vomiting:

I can’t say there are many options here with ECC. You might be able to get one of the 12th- to 14th-gen Core CPUs on a W680 mobo, which supports ECC. I’m not well versed in that, but like always, one must raise the question: do you need ECC?

With the ASRock Rack W680D4U-2L2T and a supported CPU, you should get your ECC and the CPU’s 20 PCIe lanes, spread across the x16 slot, the x8 slot and the x4 OCuLink connector, with the rest of the lanes going through the PCH (a PCIe x1 slot and 2 × x4 OCuLink).

If you can find an M.2 riser card that goes from x16 to 8 × x2 M.2 slots, you’re covered for the NVMe and still have room for the SAS HBA plus at least the x4 lanes from the OCuLink connector (although with 6 NVMe drives at x2 each, you’d only have 2 spare x2 M.2 slots left on the card if you need more NVMe). Caveat: I don’t know if the motherboard supports bifurcation into 8 × x2; it’s more likely either x8/x8 or 4 × x4.

IDK, I’m starting to doubt there’s any mATX board that fits your needs. Besides, the ASRock Rack with its on-board IPMI and the W680 chipset will be using quite a bit more juice at idle than a consumer platform.

Well, me commenting ITT was pointless, sorry.
:man_shrugging:

2 Likes

I have looked at ASRock Rack but they aren’t cheap… Maybe an older used one would be suitable though.

I have studied the tri-mode HBAs. It looks like I would need a 9600-16i to support both NVMe and SAS/SATA, but even that is a bit limited. I found an interesting post about it: Broadcom 9500-8i, NVME U.2/U.3, Tri-Mode | ServeTheHome Forums

As that thread concludes, it may be better to just use a PCIe-bifurcation-to-NVMe board, given the lanes you end up spending either way. In theory the 16i card would offer enough connectivity from a single slot, but it sounds like compatibility could be an issue, the cabling is expensive, and I’m not sure exactly what I would need.

So I’m thinking I could go for an ASRock B850M motherboard. They support ECC RAM and x4/x4/x4/x4 bifurcation, and I find ASRock products are usually pretty solid. They have:

1 - PCIe 4.0 x16 slot (NVMe drives)
2 - PCIe 4.0 x1 slot, open (future)
3 - PCIe 4.0 x16 slot, x4 electrically (HBA)

I can live with the reduced performance of the HBA; I won’t push it hard. The x1 slot could take 10G Ethernet one day. The mobo has 2.5G on-board.

The alternative is to get an older B650M board: one more NVMe socket, but no PCIe x1 slot. Realistically I can’t see myself wanting more than 2.5G Ethernet in the lifetime of this system, but there may be other cards I want in there.

The ASRock Rack ROMED6U-2L2T would fit if you can find one.
It doesn’t have a SAS HBA onboard, but you can run plenty of SATA drives off this board without needing an HBA (probably more than any mATX case will hold). So if you don’t plan to actually use SAS drives it shouldn’t matter.

1 Like

There are several things to unpack.

Yes, the HBAs can reduce the requirement for mobo PCIe lanes. There are other options, too (e.g. cards with PCIe switches).

That said, both the SAS HBAs and the PCIe-switch-based cards are notorious for not supporting power management. Double-check this before you buy.
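If you want to check a specific card, the simplest way I know is to boot a Linux live USB and grep lspci for its ASPM state. A minimal sketch (assumes lspci is installed; run as root so the link capability details aren’t hidden):

```python
# Print each PCIe device along with the ASPM info from its LnkCap/LnkCtl lines.
# Run on Linux (e.g. a live USB); root shows the most detail.
import re
import subprocess

output = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in output.splitlines():
    if line and not line[0].isspace():
        device = line                      # e.g. "03:00.0 Serial Attached SCSI controller: ..."
    elif "LnkCap:" in line or "LnkCtl:" in line:
        match = re.search(r"ASPM[^,;]*", line)
        if match:
            print(f"{device}\n    {match.group(0)}")
```

A card that reports “ASPM not supported” (or forces ASPM off) will keep the CPU package out of its deeper idle states, which is where a lot of the extra idle wattage comes from.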

Thanks gysi. I will keep an eye out for one.

At the moment I’m only using SATA, but in the future I may add a SAS tape drive, so I’d like the option to add an HBA. The alternative would be a USB-to-SAS enclosure for tape drives at a reasonable price, but those are rare to say the least.

Thank you @jode. It’s getting a bit much though: if you have U.2 connections only, that means you need U.2-to-M.2 adapters or enterprise U.2 SSDs.

You are right about the power consumption; even the best ones aren’t particularly good. I only really need SAS for a future tape drive though. Maybe I’ll get lucky and find a USB SAS enclosure.

You didn’t specify the physical format you’re interested in, only that the cases you’re looking at all have 3.5" bays. There are equivalent adapters with up to 8 × M.2 slots connected via a PCIe switch.

Are you sure you need that? Obviously, only you know.
I cannot see a reasonable application for tape drives in 2025, but there are other opinions.

Do you have a link?

As for tape, what else is good for long term archival storage of terabytes of data?

I plan on maintaining multiple ZFS pools (prod + 3-2-1 backups, regular scrubs). So far, so good.
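For anyone doing similar, the “regular scrubs” part is just a cron’d `zpool scrub` plus a health check along these lines afterwards (a minimal sketch, assuming the zfs utilities are in PATH):

```python
# Minimal post-scrub health check: warn unless ZFS reports every pool as healthy.
# Assumes the zfs utilities are in PATH (Linux/FreeBSD); wire the output into mail/ntfy/etc.
import subprocess

result = subprocess.run(["zpool", "status", "-x"], capture_output=True, text=True)
report = result.stdout.strip()

if report != "all pools are healthy":
    # Degraded vdevs, checksum errors found by a scrub, stalled resilvers all land here.
    print("ZFS needs attention:\n" + report)
```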

Tapes work (obviously), but DR testing is often either not done at all, or is a test of patience.

1 Like

Thanks. £1000 though…

I would prefer offline tapes to ZFS pools. They’re probably cheaper in the long run, can be taken off-site or put in a fireproof box, and are immune to lightning, etc. They just aren’t cheap, but archival Blu-ray is too low capacity and is getting expensive now that production is ending.

I’ve seen ~$200 chinesium ones, but couldn’t find one quickly. Doesn’t matter anyhow if you’re going the tape route.

Well I would like one for NVMe drives possibly.

We are at that awkward point in time where you still want mechanical drives for bulk storage, but SSDs are rapidly falling in price and are much better as active work drives. At least if money is a factor in your decision…

What I’m saying is I’m hedging my bets for a server that should last 10 years or more.

SSD storage is the future for sure, and the future is almost here… But HDDs still have the better price per TB, although for a full server the overall price difference is more like 25%-50% nowadays.
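To illustrate why the whole-server gap is smaller than the per-TB gap, some made-up round numbers (none of these are real quotes):

```python
# Rough arithmetic: the SSD premium per TB vs. the premium on the finished server.
# All prices are invented round figures purely for illustration.

HDD_PER_TB = 15.0     # £/TB, assumed
SSD_PER_TB = 40.0     # £/TB, assumed
BASE_SYSTEM = 1000.0  # board, CPU, ECC RAM, PSU, case; the same either way (assumed)
CAPACITY_TB = 20

hdd_build = BASE_SYSTEM + CAPACITY_TB * HDD_PER_TB   # £1300
ssd_build = BASE_SYSTEM + CAPACITY_TB * SSD_PER_TB   # £1800

print(f"Per-TB premium:       {SSD_PER_TB / HDD_PER_TB:.1f}x")        # ~2.7x on the drives alone
print(f"Whole-server premium: {ssd_build / hdd_build - 1:.0%} more")  # ~38% more for the box
```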

For a low-power homelab, maybe the QNAP TBS-h574TX could be an option? No ECC and no extra PCIe slots, but it ticks most of your other boxes.

And if you want ECC memory, there is now the 12-bay Asustor Flashstor Gen 2, which seems to be a beast, with a peak power draw of 60W when fully loaded.

The only real issue I see with the Flashstor Gen 2 is that it is limited to M.2 instead of the rapidly standardizing E1.S form factor. The jury is still out on whether that is actually a problem or not.

That is what you will get on the cheap (below $2k). AM5 server boards go for ~$300+, EPYC boards for ~$500+, EPYC CPUs start at ~$600+, and guaranteed ECC support is hard to come by.

1 Like

I don’t think anyone has verified that ECC actually works on ASRock (consumer) boards, and your requirements won’t fit into “desktop/consumer/prosumer” hardware.

…another option (keeping the cost down)

Asus ROG Strix B650E-F Gaming WiFi
Intel NIC, PCIe layout isn’t ideal but it’s doable
8-layer PCB
£199

Asus ROG Strix X670E-F
Main difference is that the M.2 slot(s) and PCIe slots aren’t shared
£299

ASRock RB4M2 4x M.2 M2 NVMe Quad SSD zu PCI-Express PCIe x16 Adapter Konverter | eBay (Note, PCIe 3.0 but you get the idea)

The only thing you’re missing out on is that extra x4 slot, plus the fact that your SAS HBA will run at x4 (which is fine) due to PCIe limitations. I mean, you’re likely to be fine with a simple ASM1166 controller, which follows the AHCI standard; just make sure you get one that uses PCIe x2.
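The x2 point is just bandwidth arithmetic; a quick sketch with ballpark figures (the throughput numbers are assumptions, not measurements):

```python
# Why a PCIe 3.0 x2 uplink is normally enough for a 6-port SATA controller like the ASM1166,
# while x1 cards can bottleneck even spinning disks. Figures are rough assumptions.

PCIE3_LANE_GBS = 0.985    # usable GB/s per PCIe 3.0 lane after encoding overhead
HDD_SEQ_GBS = 0.27        # fast 3.5" HDD, sequential
SATA_LIMIT_GBS = 0.55     # SATA 6Gb/s interface ceiling

print(f"x1 uplink:        {1 * PCIE3_LANE_GBS:.2f} GB/s")
print(f"x2 uplink:        {2 * PCIE3_LANE_GBS:.2f} GB/s")
print(f"6 HDDs flat out:  {6 * HDD_SEQ_GBS:.2f} GB/s")     # ~1.6, fits x2 but not x1
print(f"6 SATA SSDs:      {6 * SATA_LIMIT_GBS:.2f} GB/s")  # ~3.3, even x2 caps this
```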

I would avoid Realtek NICs like the plague, but that’s just me. Also, both boards are ATX.

You might also want to check driver availability on Windows for whatever you’re going to run, and keep in mind that HBAs/SAS controllers usually do not like power-saving modes at all.

1 Like

@diizzy I have an ASRock B650M Pro RS and it supports ECC. Windows reports it working as expected.
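For anyone who wants to check what their own box reports, here’s a quick sketch that asks Windows’ WMI for the memory array’s error-correction type (assumes Python 3 with PowerShell available; it only shows what the firmware advertises, not proof that corrections actually happen):

```python
# Query the standard Win32_PhysicalMemoryArray WMI class for the error-correction type.
# Assumes Python 3 on Windows with PowerShell in PATH.
import json
import subprocess

# MemoryErrorCorrection values as documented for Win32_PhysicalMemoryArray.
CORRECTION_TYPES = {
    0: "Reserved", 1: "Other", 2: "Unknown", 3: "None",
    4: "Parity", 5: "Single-bit ECC", 6: "Multi-bit ECC", 7: "CRC",
}

def reported_ecc_mode() -> str:
    """Return the ECC mode Windows reports for the physical memory array."""
    result = subprocess.run(
        [
            "powershell", "-NoProfile", "-Command",
            "Get-CimInstance Win32_PhysicalMemoryArray | "
            "Select-Object -ExpandProperty MemoryErrorCorrection | ConvertTo-Json",
        ],
        capture_output=True, text=True, check=True,
    )
    value = json.loads(result.stdout)
    if isinstance(value, list):   # boards with multiple memory arrays return a list
        value = value[0]
    return CORRECTION_TYPES.get(int(value), f"Unrecognised code {value}")

if __name__ == "__main__":
    print("Windows reports:", reported_ecc_mode())
```

A value of “Multi-bit ECC” (6) is what you’d expect on a board that’s actually running the DIMMs in ECC mode.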

My current server has an ASM1166 based card in it, and I’m happy with it. Before that I had a crappy Highpoint one that failed after a few years. I’ll transfer the ASM1166 one over to the new machine for now, until I need a SAS HBA.

I don’t like Realtek either, but I have one in that ASRock board and it’s been fine. 2.5G as well, although I need to upgrade my switch and server to that before I can really use it.

One other thing to consider is that some of these boards have WiFi support, but the M.2 socket can be used for other things. You can even break it out into PCIe ports. Unfortunately Microsoft killed WiFi AP support in Windows (and I mean really killed it: first they introduced their own implementation so all the 3rd-party apps were abandoned, then they discontinued it). It would actually be nice to have a WiFi 7 AP for just the cost of the card.

I really don’t see why you’d even remotely want to do networking, let alone run an AP, in Windows, or shoehorn it into a NAS. Just grab a cheap MediaTek Filogic-based device and run OpenWrt on it; it’ll save you a ton of time and headaches. As others have pointed out, Windows is probably not the best solution for storage unless you’re dead set on NTFS or ReFS, which both come with their own set of limitations.

I would also avoid going below an 8-layer PCB, for reliability and signal-integrity reasons in general, and I wouldn’t even bother with Realtek on a “server”, but it’s all up to you.