Power Efficient 149TB Home Server + Network (64W Idle)

I’ve been focused on power efficiency when it comes to my home server. It’s been the primary reason for doing a custom build instead of using the Dell R410s I have lying around, and it’s also why I use Unraid. This has been 3 years in the making, so here goes.

The second reason for using Unraid specifically was to let me slowly build up a set of disks over time that I could later reuse in a more performant TrueNAS build (should I need to).

--------------Parts--------------
Motherboard: ASUS X570-F
CPU: 5700G
RAM: Corsair 2x16GB 3600MHz
Storage Flash: 1TB NVMe M.2, 2TB NVMe M.2
Storage Disk: 3x 10TB, 7x 12TB, 2x 14TB
Storage Scratch: 4TB

The UPS provides power to a MikroTik RB4011, the ONT, this server, and all the LED lighting in the room. With the lights off, the idle power draw is a measly 64-80W. For reference, the PoE switch (Netgear ProSafe GS728TP) that I recently decommissioned ran with an idle draw of 50W with nothing plugged in.

Crazy Talk 01:
Now for the crazy talk: there are no parity disks. Yes, I said it. For the array to be written to while having parity protection, all drives must be spun up. The drive being written to and the parity disk perform a write operation, and all other drives perform a read operation. That’s a significant amount of power and wear. Instead, I keep off-site backups of the important data. (Keep in mind I pay 34c/kWh and these systems are always on.) Over the course of a day there aren’t many spin-ups, as I’ve designed the system to run cache-only for the frequently used shares (Syncthing, qBittorrent, etc.).
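To put a rough number on the parity delta (a back-of-the-envelope sketch; the ~5 W per spinning drive and the worst-case "spun up around the clock" scenario are assumptions, not measurements):

```python
# Cost of keeping all drives spun up for writes (parity) versus spinning up
# only the target drive (no parity), per the description above.
DRIVE_SPUN_UP_W = 5.0   # assumed draw of one spinning 3.5" drive (illustrative)
N_DRIVES = 12           # the 12 array disks
PRICE_PER_KWH = 0.34    # tariff quoted in the post

def write_draw_w(parity: bool) -> float:
    """Array power draw during a write: all drives with parity, one without."""
    return N_DRIVES * DRIVE_SPUN_UP_W if parity else DRIVE_SPUN_UP_W

extra_w = write_draw_w(True) - write_draw_w(False)
# Worst case, if writes kept the array spun up 24/7, the delta would cost:
extra_per_year = extra_w / 1000 * 24 * 365 * PRICE_PER_KWH
print(f"{extra_w:.0f} W extra while writing, worst case ~{extra_per_year:.0f}/year")
```

In practice the real delta is smaller since the array isn’t written to constantly, but it shows why spin-ups dominate the power budget.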

Crazy Talk 02:
I don’t transcode media. This removes the need for a GPU and its associated power usage. It also allows for much better image quality on client devices. I could spend hours talking about the ins and outs of this, but I’ll summarise it as: “I’m streaming pre-computed client files which are going to play without issue 99.8% of the time, instead of transcoding BD-rips on the fly.” I create and store two versions of said ‘client file’ at different bitrates. Should the larger one fail to play, there’s a fallback for that remaining 0.2%.
Somehow the Plex community has become wrapped up in the ‘Transcode War’. It makes sense until you look at it from a programming mindset (big-O notation). In terms of storage required, transcoding from a high-bitrate original and storing multiple versions both round out to O(n), with ‘n’ the same for both. When it comes to power, however, on-the-fly transcoding makes little sense: the energy it would take to transcode once and keep the transcoded file instead gets spent again on every single play, ad infinitum, ad nauseam. Not only that, but there are other issues, including added latency, decreased quality, and often increased bandwidth requirements. Comparing the two with big O over the number of plays, I would give hosting multiple versions O(1) and on-the-fly transcoding O(n).
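The energy argument boils down to a one-liner (the 50 Wh per transcode is a made-up illustrative figure, not a measurement): pre-computing pays the transcode cost once, on-the-fly pays it on every play.

```python
TRANSCODE_WH = 50.0  # hypothetical energy to transcode one file once

def total_transcode_energy_wh(plays: int, precomputed: bool) -> float:
    """Pre-computed: transcode once, ever. On-the-fly: transcode per play."""
    return TRANSCODE_WH if precomputed else TRANSCODE_WH * plays
```

Over 100 plays the pre-computed file costs one transcode’s worth of energy; on-the-fly costs 100x that, which is the O(1) versus O(n) point above.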

Crazy Talk 03:
Performance is a big part of this too. I’m normally away on a work trip and most of my family are spread around the world, so about 90% of the streaming is done over the WAN. This limits it to 200 Mbps (25 MB/s). The idea of having a highly performant ZFS pool for this situation is nonsensical. (Before you crucify me, I do run TrueNAS with Z-pools, just not for this.) When it comes to starting video playback, skipping through the video, and even buffering, direct play beats transcoding by far.

I think as communities we’ve become wrapped up in competing with each other rather than actually finding elegant/appropriate solutions to our hurdles. An honourable mention is Dropbox, who improved their performance by removing SSDs and writing to SMR HDDs.
Increasing Magic Pocket write throughput by removing our SSD cache disks - Dropbox.

You might rightly assume that this system must not be that functional. I beg to differ. The main uses for this guy are Plex and VPN hosting, and here are some usage stats.


I got tired of manually running the whole show as far as file management goes, so I’ve got quite a few containers handling my very stringent media requirements.

I don’t leave these containers running because they cause unnecessary disk spin-ups while performing their tasks. Instead, I run them at the end of the week while I’m messing with the server anyway. There are a million and one other little details about how I’ve optimised this system (e.g. a Curve Optimiser all-core undervolt of -13), but this rant has gone on long enough :slightly_smiling_face:

Most of the time there’s someone watching so it’s at 80W. For those quiet nights it drops down to 64W.
RB4011 10W, Server 54W

I hope you found this post interesting :+1:


Yeah, it is. I see the thought and work you put into this to make it work. It makes sense, but certainly not in the conventional meaning when talking storage. I see the benefits, because I pay a lot per kWh too (~35c) and power draw and total costs (of ownership) should always be part of the equation.

I just don’t think buying expensive stuff and then limiting yourself by shutting down most services, without actually shutting down the server, is really that useful. Reminds me of only running the server in the afternoon to save power. My parents have an electrical heater to heat their winter garden that cost like 1500€, but they rarely use it to save power. Most expensive heater ever per hour of heating.

And we’re talking about maybe a 30W delta with drives running 24/7. That’s +40%, but it’s still a low operating cost considering the investment you made. And you’re paying for it with shut-down services, low performance, only partial backup, no redundancy, and some strange BD-ripping I don’t understand.

But then I’m using TrueNAS, and the UnRAID folks were always a bit weird and mystical for my taste. Maybe I’m just missing a gene so this would make sense. I call it the “spin-up gene” :wink:

I’m a simple man. When I see two drives, I see double the IOPS and bandwidth. “Opportunity awaits!” so to say.

But running with 200TB raw at 60ish Watts most of the time, which is online storage (although limited availability and performance), is certainly a merit of itself. And for a mainly archival usage, I can see people picking up some ideas.

I really like the build log. Something different.

P.S.: Check on the forums if you switch to TrueNAS, you may want recommendations for 25G NICs with those unleashed disks :slight_smile:

But I think you love and enjoy your system as it is, and that’s the main and most important part of it.


I’m glad you found it interesting :sweat_smile: It’s pretty far from what people normally do.

The qBittorrent container is always running even though it’s off in the picture. I was making changes to files in the directory at the time.

None of the services I shut down were actually useful on a 24/7 basis. The arrs are media management applications; they rename files and query indexers. Once the query is done, qBittorrent takes it from there, so whether the arr is turned on a month from now or always running, the file is downloaded all the same. It’s the equivalent of asking Google Chrome not to run in the background.
However, they do run little tasks, including checking for changes in subdirectories, which negates the effectiveness of disk spin-down.

The power usage before all these changes would only go as low as 144W at best, which comes out to £631 per year; going down to 64W takes that down to £280 per year. The last step, from 120W to 64W, seems small, but it came at no cost to functionality.
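For anyone checking the arithmetic, a constant load costs watts/1000 x 24 x 365 kWh per year; the figures above imply a tariff of roughly £0.50/kWh (my assumption from back-solving, not a stated number):

```python
def annual_cost(watts: float, price_per_kwh: float = 0.50) -> float:
    """Yearly cost of a constant electrical load at a flat tariff."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

print(round(annual_cost(144)))  # ~631
print(round(annual_cost(64)))   # ~280
```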

As for why people pick UnRAID over striped storage alternatives: if catastrophe were to strike and I lost a disk (even without parity), I would lose 1/12th of my data, which is replaceable. Given the extremely light working environment, I dare say that’s not happening to any of the disks in the array for at least 5 years. The 4TB write cache is a WD Red from 2015, and I’ve brutalized it with writes to the tune of 1.5x the entire array (225 TB), all random writes. Each array disk, by contrast, has only seen sequential writes totalling just over its capacity.

UnRAID users get a bad rep and I’m definitely not helping :upside_down_face: That being said I run TrueNAS as a VM in Proxmox too :wink: I like them both, I don’t think they’re competitors to each other. One is targeted towards max performance, the other max efficiency.


I wonder if you could go down to <20 watts with some kind of SBC and a USB3 multibay enclosure. (using mergerfs instead of unraid).

Yeah, definitely possible. If you look at the chipsets commonly used in things like QNAP NAS devices, they offer decent performance at that power budget.
MergerFS is a good alternative too if you’re looking to slap some disks together, but you’ll have to handle a lot of the other variables yourself: finding errors like trailing whitespace in file names, handling shares, caching, permissions, Docker apps and images, etc. The benefit is that it’s lightweight.

It is a worse idea than it seems. Last I checked, a 2.5-inch HDD in a USB enclosure took 1.1 W from the wall when spun down. A drive connected directly to SATA can go at 0.5 W or maybe even less (not the same drive; this one is special, glued to its enclosure).

I think it’s better to use something like a Dell OptiPlex or HP ProDesk than a Raspberry Pi. They are darn efficient. My server is built on top of an OptiPlex 5050 Micro. Without any hard drive or USB device it can go as low as 1.6 W from the wall; with a single NVMe SSD, 1.9 W.

Bigger ones (SFF or MT) obviously take more power, but not too much. One guy claimed an HP 600 G3 SFF with one SATA SSD, running Proxmox with Jellyfin, WireGuard, Home Assistant, etc., drew 3 W from the wall at idle. If that’s right, then an MT could maybe go at 5 W. For 20 drives? I don’t know how you’d pack them, but you’re certainly in a better position for that than with a Raspberry Pi. And you get a quite efficient power supply for the disks in the box (80 Plus Platinum).


Welcome to the forum @mradalbert!

Kinda agree; there are better solutions out there if you’re looking for efficiency, but you’ll have to deal with the quirks and the early-adopter tax.

My entire homelab right now is powered by a single desktop UPS. It reports 49W out. This is currently powering:

  • RockPro64 router (nothing except a USB wifi dongle attached to it)
  • Zyxel XGS 1210-12 (currently 4 ports running, one of them being the 2.5G one)
  • RockPro64 NAS with 2 IronWolf Pros and 2 Crucial MX500s (using a custom PSU converting 19V from a laptop-like brick into a 12V rail and a 5V rail via buck step-down converters)
  • Odroid N2+
  • Odroid H3+ (which is my main desktop) and a portable 15.6" monitor (main display)

By far, the display takes the most power, along with the h3+, probably around half the budget, with the other half going to the NAS. Planning to add my HC4 (which I don’t keep always on yet, because I don’t have my HDDs).

When you have irreplaceable data, you’ll be investing in redundancy and, most importantly, bit-rot protection. If I were to make a video streaming service for which I owned the DVDs/Blu-rays, I’d probably just have the files spread across multiple disks and organize everything with symlinks in a single folder. Since the original discs serve as the backup, I wouldn’t need another backup for them. But I’m someone who’s been affected by bit-rot in the past, partially losing a lot of memories (images that were corrupted but still openable, videos that would only play up to a point before crashing, or played back with crackling noises), so I take that protection seriously.

I’m monitoring my pool status often. My HC4 will be my backup server, which I plan to have start on demand (probably via WoL from the rkpr64 router, if possible) to launch backups every now and then. Because I don’t want to deal with ZFS deduplication, I’d rather use something that already handles it well, like restic. I already have backups of my PC in my NAS in both zpools, so as far as losing stuff goes, I’m currently safe there (the plan is to have it in a single pool and back it up to the HC4).
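The on-demand WoL start can be scripted in a few lines; a minimal sketch, assuming the HC4’s NIC has WoL enabled (the MAC address shown is a placeholder):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """WoL magic packet: six 0xFF bytes followed by the MAC repeated 16 times."""
    payload = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + payload * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network via UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC for the backup box
```

The same thing is a one-liner with `etherwake` or `wakeonlan` if those are packaged for the router’s OS.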

With the HC4, I expect my power draw to increase by at least 40W when everything spins up (probably 6 to 10W at idle), but nothing beats powering the thing off. And when I add a bit more containers to my homelab, I expect the power draw to go up by a little, but not a lot (maybe 5 to 10W at most, ARM SBCs are just so dayum power efficient, although not very powerful).

I’m thinking of burning some money and going back to my redundant home network setup, but this time with 3 switches, 2 NASes and a container SBC on each switch. Still undecided about that, but I’m pretty sure by now I want to switch my entire homelab to 12v DC and use USB PD cigarette lighter adapters. I already have a 12V 120W one that can spit 100W USB PD at 20V and the other USB PD voltages. I got a 20v USB PD to 2155 barrel jack (using it on my h3+), but it’s powered from an ac anker brick. I got another one that I’m planning to test with my NAS (haven’t gotten around to it yet, because I need to power it down and make sure the voltages from the step-down buck converters stay the same given the 1v input change, so I don’t fry my data). I would need a 12v and a 15v USB C to 2155 jack adapters and I’d be set.

That’ll get me closer to being able to power my entire homelab via solar panels and a portable power station. If I can get a hotspot that works plugged in with its battery removed, I’d be set: portable computing everywhere (although obviously I could only use IPv6 for remoting back to my network, but that wouldn’t be too bad; we should be moving to IPv6 anyway).