[Build Log] Silent Night - my own take on a quiet and power-efficient NAS

Hello everyone! I’ve been meaning to build something like this for ages, but only now have I had the chance to find parts I deem worthy of the job and the money to do so.
Since this NAS is gonna live in my very small apartment, it needs to be quiet and sip as little power as possible. Going off of those two requirements, my search began!

What I had to find first was a good motherboard + CPU combo that would give me good performance per watt for a NAS application, with some containers sprinkled on top for good measure. The ASRock N100 series fit the bill, but I couldn’t use the DC version, or I would’ve had no “clean” way to power more than two drives. It also offered no expansion beyond its single PCIe slot, which could get limiting in the future when I add 2.5Gbit LAN (unless I adapted the Wi-Fi slot to a LAN connection). So I went for the mATX version, cool. Paired it with a single 32GB stick of Crucial DDR4-3200 to give ZFS and my services some breathing room.

The second piece of the puzzle is a PSU that’s efficient especially when the system is idling, because a NAS spends most of its time idling. Pico PSUs immediately came to mind, but I went back and forth on the idea of getting an 80 Plus Platinum unit or a Corsair RM550x 2021, which is the holy grail of low-load power efficiency in an ATX format. In the end I didn’t go that route, both to keep the size of the machine down and because I couldn’t find a Corsair RM550x 2021 anywhere!
My final choice was a MiniBox TX-160 with their 4-pin 192W external brick. I added a second Molex + SATA power connector to it, to avoid having to power all the drives off a single connector on the PSU.

Then, storage: SATA SSDs were the only possible choice for the combination of low power consumption and no noise. I only had to wait for Black Friday deals on Amazon to buy the Samsung 870 Evo 4TB drives I’d been eyeing for a while. Amazon decided to not list them for a while, then brought them out heavily discounted during those days. They should draw around 600mA each, as reported by reviews, so they’re not gonna max out the 8A the PSU can deliver on the 5V rail.
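
Back-of-envelope check, assuming the ~600mA-per-drive figure from reviews holds for all six drives:

```latex
% all six drives active at once, on the 5 V rail:
6 \times 0.6\,\mathrm{A} = 3.6\,\mathrm{A} \ll 8\,\mathrm{A}
% which is a worst case of only:
3.6\,\mathrm{A} \times 5\,\mathrm{V} = 18\,\mathrm{W}
```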

The case was another big point of contention for me. I wanted something similar in liters to a pre-built NAS. I decided to gamble on a SilverStone SST-ML05B, eyeballing the space inside and hoping the motherboard was gonna fit. Since I wasn’t gonna use an SFX PSU I wasn’t too concerned about clearance, and when I add the 2.5Gbit LAN I’ll make a provision for it in the blank space left by the unpopulated PSU slot.

The last piece to making sure this system wasn’t gonna chug through watts at idle was choosing the proper SATA controller to hook all the drives to. I scoured the Unraid forums and watched lots of videos, only to find out that the JMB58x controllers found on cheap adapters have terrible firmware implementations and do not support ASPM. So I decided to go for the ASM1166, which is confirmed to support proper ASPM, BUT only with updated firmware. So I went for a SilverStone card that ships with the updated firmware, the ECS06.
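
For reference, this is roughly how I planned to verify that ASPM actually engages once the card is in (a sketch; 01:00.0 is just an example address, check `lspci` output for the real one):

```bash
# find the ASM1166's PCI address
lspci | grep -i asmedia

# LnkCap shows what the device supports, LnkCtl what the kernel actually enabled
sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkCtl:'
```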

With the basics covered, it was time for the miscellaneous bits:

  1. A couple of 80mm Noctua Redux fans: super quiet even at full load, and they should move a decent amount of air inside that tiny case.
  2. Molex to 2x SATA splitters, to power the 4 drives the PSU’s own connectors couldn’t reach (I had bought SilverStone adapters, but they were too bulky and stiff, so I sent them back).
  3. A Sabrent Rocket 3.0 512GB, reused as the boot drive.
  4. Six low-profile Cabledeconn SATA data cables.

Now let’s start with the build:



First time checking how the board fits and how much space I have to work with.

Decided to forgo the front panel connector to save some space and hassle. I’m never gonna use it, so it’s still wrapped up in the case box.


First couple of test boots outside the case were successful! The system takes the single 32GB DDR4 stick with no issues and runs it at the JEDEC 3200MHz CL22 profile.
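
For later, once there’s an OS on it: a quick way to confirm what the board actually trained the DIMM at, assuming dmidecode is available:

```bash
# "Configured Memory Speed" is what the board actually runs the DIMM at
sudo dmidecode -t memory | grep -E 'Speed|Part Number'
```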



More cable management testing, and figuring out where all the cables should go and how I should mount everything inside the case. I tried routing the cables many different ways to find the best paths and see how they would realistically fit. Power and reset cables are routed under the motherboard. This exercise made me realize the Molex to SATA splitters ruled out my idea of shoving all the drives on the left side of the case. So I ended up mounting a couple of drives to a bracket and hooking them up to the motherboard’s own SATA ports. This also means that, theoretically, I’ll get the full performance of the array: the controller is PCIe 3.0 x2 and tops out around 2GB/s (for its 4 drives), and the two onboard SATA ports are good for about 1GB/s between them. Math-wise, it checks out.
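
The rough numbers behind that (PCIe 3.0 carries ~985MB/s per lane after encoding overhead, and real-world SATA III tops out around 550MB/s per drive):

```latex
% ASM1166 uplink, PCIe 3.0 x2:
2 \times 985\,\mathrm{MB/s} \approx 1.97\,\mathrm{GB/s}
% 4 drives behind the controller, close enough to that ceiling:
4 \times 550\,\mathrm{MB/s} = 2.2\,\mathrm{GB/s}
% 2 drives on the onboard ports:
2 \times 550\,\mathrm{MB/s} = 1.1\,\mathrm{GB/s}
```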


Basking in the opulence of 24TB of flash storage. It may not be much for some, but it’s A LOT for me!



Final runs are decided. This was just a couple of drives on that bracket and one in the drive cage on the left (which came with the case but wasn’t meant to go there; I rotated it and it fit snugly. A piece of double-sided tape keeps it firmly in place now).



The final build has been assembled; this is how it’s (hopefully) gonna boot and get TrueNAS Scale installed on it.
Everything is in its place, the cables are all out of the way, and it seems like it’s gonna work.

I’m gonna eventually go for a couple upgrades:

  1. A 3D printed SSD cage that fits better and closes the SFX PSU gap.
  2. A home-made “backplane” to hopefully get rid of all the splitters and cables running around to power the drives.

I will document the software installation further along in the thread. Thanks for following along and reading through all of this! It wouldn’t have been possible without the community, which always came to my aid and answered all my questions knowledgeably.


Installation and first setup were successful:

The system idles in the C10 package state, the deepest possible. This is amazing, and never a given when putting together a system like this from scratch! All the research I’ve done paid off.
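
For anyone wanting to check the same on their box: the package C-state residency shows up in powertop’s Idle Stats tab, or via turbostat (from linux-tools; a sketch, run at idle):

```bash
# let the system settle, then watch the Pkg C-state columns;
# a good idle spends most of its time in the deepest pc-state
sudo turbostat --quiet --interval 10
```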

powertop --auto-tune worked flawlessly and correctly tuned all the devices in the system. I couldn’t be happier!
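
One caveat: the auto-tune settings don’t survive a reboot. A oneshot systemd unit is the usual workaround; here’s a sketch (the unit name and powertop path are my own picks, and I believe a post-init script set up through the TrueNAS Scale UI can run the same command):

```bash
sudo tee /etc/systemd/system/powertop.service >/dev/null <<'EOF'
[Unit]
Description=Re-apply powertop --auto-tune at boot

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now powertop.service
```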

I’ll try to measure power consumption in watts as soon as possible.


I like the case (I use a SilverStone case myself, albeit a larger form factor to get 8x HDDs running: the CS381). Screw holes are a bit cheap on my case. Did SilverStone fix this?

Very nice SFF, low power and SATA SSD build. Small, cool, silent. Modern.

:+1:

I have a 15€ watt meter for the wall. It has helped me with all kinds of appliances and powered devices in my home. A good investment, and it gives you some (sometimes shocking) numbers. Certainly not scientifically accurate, but it does what I want.

The N100 is really on the edge…but it has more horsepower and lanes than an RPi. Should make for a great storage appliance. I think you have other machines for VMs and CPU-intensive work. But LZ4 compression is basically free, so don’t disable it. SATA SSDs aren’t cheap per TB, and getting at least another +30% effective capacity is worth a drive or more.
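
It should already be on by default in TrueNAS, but verifying costs nothing (“tank” standing in for the actual pool name):

```bash
zfs get compression,compressratio tank
# and if it ever got switched off:
zfs set compression=lz4 tank
```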

And when it comes to flash and power consumption, SATA SSDs are really good. As soon as you look at U.2, low power is basically gone.

24TB of flash is a lot. That’s more than most people have. And expensive. And the 870 Evo is solid. Deciding not to cheap out on QLC or no-name brands will serve you well in the coming years.

edit:

With that low a power draw, I don’t think Platinum would do you any good. Only Titanium is even rated at 10% load (90% efficiency required there). And even 20% load on a 650W PSU (130W) is way more than your build will ever draw. Figures below 10% are usually way worse (if mentioned in the data sheet at all). Consumer PSUs just aren’t built for ~50W. For 100-200W, (expensive) Titanium PSUs are really good.

I’m going for a 100-200W server build soon and I found the BeQuiet! Dark Power 13 750W Titanium PSU to be the perfect match. But utility certainly breaks down for builds <100W.


Fit and finish is still a bit wonky and some holes are a bit off, but it’s not too bad. What I still hate about their cases are the non-magnetic screws!

Yeah, I should really buy one to measure it and other things around the house. Now that I think of it, I forgot to check whether ASPM is enabled for all the devices, but I think it is.
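
Note to self for when I get to it, the quick checks (the global kernel policy first, then each device’s actual link state):

```bash
cat /sys/module/pcie_aspm/parameters/policy
sudo lspci -vv | grep 'LnkCtl:'
```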

It might sound weird, but it’s double the performance of a Pi 5 in both single- and multi-core. I wanted to make something out of an ARM machine, but the jank level was too high for me on all levels, hardware and software.

Yeah, that’s covered and, most importantly, not always on, because “mission critical” jobs need my supervision and input.

I know, but I couldn’t fit a bigger and noisier machine, unfortunately. Also, 14.4TB of usable space is enough for now and the near future. Hopefully, by the time I have bigger storage needs, I’ll be living in a larger apartment.

I think they’re as good as it gets. U.2 needs PCIe, which is a somewhat less efficient bus to work with, so it uses a bit more power at idle.

To me this is a 10-year-lifespan array; that’s why I splurged. I also made sure to catch these drives at their lowest price on Amazon during the first day of their Black Friday sale. They were $1200 all in. Expensive, yes, but reliable and known good.

Now that I’m seeing what this thing is capable of, I’m glad I chose a Pico PSU, even if it made me doubt it could power 6 SSDs (an unwarranted scare, but that’s the anxious part of me talking).

Is it gonna be build log worthy? I’ll be following along if so!

Thanks for having read through all my rambles and thoughts!

This! And the number of screws…it’s fine and works if you build it and forget about it. But I tinkered and changed PCIe cards in my CS381. And the dust filter needs like 12 screws to service (incl. the top cover).
But SilverStone has nice small form factor cases you can fit anywhere at home. I can still recommend their stuff, and they have a good range of accessories for their products. A good one-stop shop. I like it when a company has a range of products that complement each other.

That’s the best you can hope for. Buying it, building it, and then putting it in some corner where it does its job for years to come. Maybe there will be defective drives, PSUs or fans…maybe you upgrade the board+CPU in a couple of years (the least expensive parts). And as long as HDDs hold the €/TB crown, there will be SATA. And SATA SSDs.

Data sheets all state 5W idle. They’re built for different use cases; SATA SSD power states go far lower than enterprise gear. And not only do you have to account for up to 20W per drive, you need a platform that supplies all the lanes. That means you cap out at 5-6x NVMe at maximum on consumer boards. SATA is ubiquitous and way more scalable, and upgrading the platform from SATA SSD to NVMe is expensive. And if 10Gbit fits the bill, you quickly end up in overkill scenarios.
Yes, SATA SSDs aren’t that much cheaper than NVMe. But you can use an N100 for SATA, while you need EPYC for NVMe (or live with 4-6 NVMe max on a consumer board, which is still miles away from an N100 when talking cost + power bill + form factor).

A CPU isn’t just about clock speeds and power draw; it’s also about what connectivity and IO you get. With an RPi you can get a SATA hat and run 4x HDDs, but that’s about it. The RPi 5 has “more” PCIe, but it’s still a limiting factor. Expansion/lanes cost power and die space. Just ask AMD about their dedicated IO die.
And without ARM, you have the advantage of not having to compile everything on a bottom-tier CPU :slight_smile:
ODroid or N100/embedded is the logical step if you need more connectivity.

As soon as ASRock ships my “waifu-board”, I’ll certainly post a build log for a German-friendly (40ct/kWh) EPYC home server. I’m going Ceph, NVMe, rackmount and low power (for a fully-fledged server platform). Maximizing perf/watt while delivering a performant hypervisor plus high-availability storage. That’s the plan.

It won’t be suitable for my living room, unlike yours (does it really need fans? :slight_smile: ). But that’s what homelab is all about…variety, different and individual paths, and also some nerdy and irrational aspects, because we like it that way.
We could just buy an Asustor with 1x NVMe off the shelf, but we don’t want to. We need some extra ooomph, something we planned and got our hands dirty with. And we run the software we want, because now we have options. To borrow from the US Marine Corps: “This is my server. There are many like it, but this one is mine.”

“Silent Night” is more of a 50W build than a 150W server. Although the efficiency of Pico PSUs isn’t world-record-breaking, I’d have chosen the Pico PSU for your build just like you did. A 100€+ price difference over a 1-3W delta just doesn’t make sense.
At my power prices, +10W running 24/7 is equivalent to 35€/year. So if a PSU is 100€ more expensive but saves 10W, your return on investment takes 3-4 years. Pretty good for a PSU with 10 years of warranty. And high efficiency → less waste heat → less cooling needed → more silent.
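
The math behind that figure, at my 40ct/kWh rate:

```latex
10\,\mathrm{W} \times 8760\,\mathrm{h/yr} = 87.6\,\mathrm{kWh/yr}
87.6\,\mathrm{kWh/yr} \times 0.40\,\text{€}/\mathrm{kWh} \approx 35\,\text{€}/\mathrm{yr}
% and the break-even on a 100 € pricier PSU:
100\,\text{€} \div 35\,\text{€}/\mathrm{yr} \approx 3\,\mathrm{yr}
```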


I will too, with reservations.

Exactly. It’s more effective for my use, platform-wise.

Agreed, but CPU performance is important when it comes to x86, because it’s pretty easy to end up with a cheap low-power CPU that struggles. Meanwhile, an ARM system with similar number-crunching power works way better; the RISC architecture makes it more efficient in every way.

I’m pretty sure it’s gonna be a great machine!

I bought two Noctua Redux 80mm fans to get some breeze over the CPU. It can’t sustain loads fully passive because of the integrated cooler. With some air going over it, it can pull 35W all day and not throttle.

The tinkerer mentality is always there, for good and bad.

No, but they almost reach Titanium levels of efficiency at a way cheaper price. It was also needed in my case, because I left no space unused in the ITX case. The mATX board, even though it’s smaller than a typical mATX one, robbed a lot of space.

Absolutely, the power savings are gonna pay off pretty quickly.
