Low-powered home NAS + pfSense + apps + VM

While waiting for Threadripper 5000, I might as well get going on my home network.

I want storage, routing, firewall, NextCloud and possibly a small VM or two for special services. Basically the all-in-one non-desktop solution: a server that doesn’t consume more than 100W of power (German here, I pay 28ct/kWh for 100% wind+solar) while using 4x HDD and standard components, no mobile or embedded stuff.

My draft so far: SilverStone CS381 case, 64GB ECC memory, ~8-core desktop CPU (Intel T-series viable? Ryzen ECO mode?), 10Gbit LAN over copper, spare 1Gbit ports (WLAN and modem connection, maybe a guest network too), 4x HDD in ZFS as 2x striped mirror vdevs (+ M.2 needed for SLOG/L2ARC?). Not sure how I want to connect my SATA yet.
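For reference, the pool layout I have in mind would be something like this (just a sketch with placeholder device names, assuming ZFS on Linux):

```
# Two mirror vdevs, striped together by the pool (RAID10-style layout).
# The /dev/disk/by-id paths are placeholders for the 4 HDDs.
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-HDD0 /dev/disk/by-id/ata-HDD1 \
  mirror /dev/disk/by-id/ata-HDD2 /dev/disk/by-id/ata-HDD3
```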

I’m mainly asking for best-practice advice, especially suitable motherboards and power tweaking. The 100W limit should be achievable with an underclocked/undervolted CPU (are Intel’s T-series any good? Is AMD ECO mode more than marketing bullshit?)

I checked on ASRock Rack AM4 board availability, but didn’t find any big retailer selling them here. IPMI + on-board 10Gbit LAN would be great.

1 Like

See:


Sounds like your use case… (Ctrl+F watts).

2 Likes

Depending on your services and some other factors, 100W might not be achievable, but who knows. To be on the safe side, I’d get a 6-core Ryzen and underclock it, so that at full system load, power consumption shouldn’t jump over 100W (basically make it a 45W CPU and add all the other components). Don’t add a GPU; just use one while you install your hypervisor, then remove it.

After reading the requirements again, I’d say you split the workloads.

1 box with something like an Intel Celeron J variant (dual or quad core, doesn’t really matter, just get one with the AES-NI extension) and an Intel quad NIC. Or buy one of those small boxes with the ports already on it. If there is not a lot of traffic on the LAN side, you can get by with just 2 ports and use VLANs. You can get such a box under 15W.

10G copper is on the power-hungry side of things. Do you really need 10G? If you do, you’d certainly need better hardware than a Celeron with its poor Atom cores, but I think an i3-6100 or similar should be fine. If you don’t really need 10G to all hosts, I’d say buy a switch that has 2 or 4 10G uplink ports and the rest 1Gbps ports. That way you can connect your server to the switch and you won’t have a network bottleneck at your server. And with this setup you won’t have 10G inter-VLAN traffic, so you don’t need a box with anything bigger than a Celeron J.

You can push 1Gbps on the WAN and on 3 LANs with a Celeron J3455; newer models are probably just a little more powerful, but not by much. So keep your money and only connect 10G to your server (if you even need to) and keep the rest of the network gigabit. Or maybe go for a 2.5Gbps switch and get a 2.5Gbps dual or quad NIC for your router too. Again, it’s on you to decide how many resources you need, or to tell us everything you are going to do and we will make a setup for you.
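If you want to see what a given link or box can actually push before spending on 10G, a quick iperf3 run between two hosts will tell you (a minimal sketch; the hostname is a placeholder):

```
# On the receiving host (e.g. the server):
iperf3 -s

# On a client: 30-second TCP throughput test, then the reverse direction:
iperf3 -c nas.lan -t 30
iperf3 -c nas.lan -t 30 -R
```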

Depending on what you do, I would say skip the ZIL (SLOG) and L2ARC; you don’t really need them for home use unless you make massive use of the storage, or you also run some production-like workloads directly on it. I would suggest you buy a separate NAS box. But you can do a VM, like I did, and just give it a significant chunk of your storage, leaving about 25% empty. It’s not good to go over 80% used space on your storage; I left 5% just in case for other VMs and such. Maybe allocate even less for your data-hoarding VM, like 60% of max capacity, and leave 20% for the rest.
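If it helps, that kind of allocation can be enforced with dataset quotas instead of discipline; a rough sketch, with made-up pool/dataset names and an assumed ~16T of usable space:

```
# Cap the data-hoarding dataset at ~60% of capacity:
zfs create tank/hoard
zfs set quota=10T tank/hoard

# Guarantee some space for the other VMs so the hoard can't starve them:
zfs create tank/vms
zfs set reservation=1T tank/vms
```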

Are you going to be writing to it intensively? If not, RAID1 should be fine; just get bigger disks. If you really need tons of storage, then RAID10 might be necessary. And if you’re going for the real hoarder life, I’d get 6 disks and do RAID-z2. Again, from what you said, I’d say RAID1 should be sufficient. If you are going to run a home streaming server, then depending on how many devices access it at once, you might need RAID10.

If you combine the NAS and services server in one box, I’d go with a 6 core Ryzen (maybe even a 4 core depending on what you do).

You could go lower: 32GB of RAM, a quad-core Ryzen (8 threads), a VM for storage, then LXC containers for NextCloud and other services that don’t need full VMs. Or you can go the Docker route and configure a small VM where you run your services. You should then still have enough CPU and RAM left for 5-6 VMs, depending on the size you want them to be.
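For the Docker route, each service inside that small VM is basically a one-liner; for example NextCloud (the official image on Docker Hub is called nextcloud; the port and volume here are just an example):

```
# Nextcloud web UI on host port 8080, data persisted in a named volume:
docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  --restart unless-stopped \
  nextcloud
```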

It is easy to overbuild a server, but it is ultimately a waste if you have a lot of overhead you won’t use. You can always upgrade the CPU and RAM if you need to and, if your case supports it, add more storage down the line.

The most important thing, I’d say, is to split the routing/firewall box from your server box. You seriously don’t want your internet access to go down when you reboot your server. Routers can have way bigger uptime than virtualization servers. Plus, you always have to deal with the fact that an update to the hypervisor might break the NIC passthrough to your firewall VM.


Now my rant: I had 3 servers:

  • 1 with 10x 2TB drives in RAID-z2, a quad-core 8-thread Lynnfield Xeon X3450 and 24GB of RAM (I had 4 sticks of ECC; one wasn’t working and was spewing errors on boot).
  • 1 with 4x 2TB drives in RAID10, Pentium G4560, 8GB of RAM
  • 1 with 1x 512GB SSD, Celeron J3455*, 8GB of RAM.
    + another J3455* with 2GB of RAM as my pfSense box.

I had them all up 24/7. I wasn’t paying a lot for electricity, but they were idling 99.9% of the time. I did spin up some VMs on the Xeon, a remote storage for surveillance footage, a prometheus+grafana VM for monitoring VM storage mostly, my SMB file sharing server (used between my PCs) and some test VMs. Way overkill for what I needed, but they were either free junk hardware (the Xeon), or they were repurposed systems (the Pentium was my main PC, then moved to the Celeron, then moved to a Pi 4, so I added those 2 to Proxmox and made a cluster). To be honest, for my needs, I could have gotten away with just the Pentium and a 6 drive RAID-z2 and keep the others as spares, but no, I had to make my setup overkill (lessons learned).

2 of my servers (the Xeon and the Celeron) got taken down by a power outage. Thankfully my data was safe on the Pentium. My next homelab will be a cluster of low-powered SBCs (mostly because of the power consumption). If I can get them to consume 5-7W most of the time, with the exception of the NAS, I’ll be happy (I want quite a lot of SBCs, just to make everything redundant).

7 Likes

Have you looked at the TrueNAS Mini X+ from iXsystems, or a build that mimics it?
It has all you’re asking for, in the power envelope you are talking about (100W).

and TrueNAS SCALE (the iXsystems NAS/hypervisor OS based on Linux) will be able to handle all your use cases …

I won’t get into the “is running your router in a VM right for you” debate; if you feel like you can handle the need-to-be-always-on part, TrueNAS SCALE (and a switch that supports VLANs) can definitely do it.
I am running a similar setup on an old MicroServer Gen8, sans 10Gb networking, and it has worked for me for the past couple of years (even with the FreeBSD-based TrueNAS) …
My system uses an older platform that is definitely less powerful but less power-hungry as well … I am running a Xeon E3-1265L V2 @ 2.50GHz, 16GB of ECC RAM, three WD Red 6TB drives and one Sun F60 NVMe, for around 60W at idle and 80-90W under full load …

4 Likes

Moin,

I’d do as @ThatGuyB suggested and deploy 2 servers. That’s what I’ve done, and I couldn’t be happier having a dedicated firewall… The great thing is that I have deployed my OPNsense FW with ZFS as the filesystem, which allows me to do full backups, and at least snapshots before updating it. Since I’m on fiber I just needed to create VLAN7 (Telekom) and do PPPoE over that interface. Getting IPv6 set up is also pretty easy.
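The pre-update snapshot is a one-liner; roughly this (zroot is the usual FreeBSD/OPNsense pool name, adjust as needed):

```
# Recursive snapshot of the root pool before updating:
zfs snapshot -r zroot@pre-update-$(date +%Y%m%d)

# Confirm it's there:
zfs list -t snapshot
```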

My main server is a Ryzen 1600, but it uses 76-82 watts idle… Really not great. Another problem with some Ryzen CPUs is that they tend to crash sometimes when they’re idling along in a low c-state… The newer generations should be fine, but do yourself a favor and make sure yours doesn’t crash!

Give this a watch: Corsair RM550x - 550W Netzteil vs PicoPSU 90 - YouTube

That’s the switch I use: MikroTik CSS326-24G-2S+RM. It’s a nice switch, but if you have the money, do yourself a favor and buy the same one as a managed switch, so you get more options to play around with. As it is, it’s pretty basic, but it has most of what you need (VLANs etc.)

2 Likes

I recall that being an issue with power supplies. Newer PSUs should not crash Ryzen in low c-states.

As much as I also like MikroTik, I’d say go with any cheap 2nd-hand switch that has 10G modules. HP ProCurve 2920s come in 24- and 48-port variants, and you can find some with a 2-port 10G Ethernet module in the back. Those are usually for connecting switches together, but they can be used for servers. Or, if it is cheaper, just get an SFP+ PCI-E card and some modules to connect your server and your switch; you never know what deals you can get on the 2nd-hand market. But for $140, the linked MikroTik is not bad, so if you go 2nd hand, you should not go over $100 for both the modules and the switch, otherwise it’s probably not worth it.

In my case I went a little overboard and use two MicroServers that act as backups of each other :slight_smile:
The first NAS hosts my router VM, which is snapshotted and sent to the other NAS using ZFS every 4 hours.
The second NAS hosts my main network shares and runs Docker, onto which I have deployed:

  • home automation system (openhab)
  • energy monitoring solution (emoncms)
  • prometheus
  • grafana
  • unifi controller
  • mqtt
  • node-red automations (christmas lights, pool pumps and chlorinator, flood alerting system)
  • some other accessory servers
  • plex
  • cloud backup to onedrive

The second NAS also hosts some iSCSI LUNs off of which I boot four Raspberry Pis distributed around the house (I hate SD cards), and all of this is snapshotted and sent to the first NAS.
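Conceptually, one replication cycle looks roughly like this (pool, dataset and host names are placeholders; a scheduled setup would normally use incremental sends with zfs send -i, and the target parent dataset has to exist):

```
# Snapshot on the sending NAS, then replicate to the other box:
SNAP="tank/routervm@$(date +%Y%m%d-%H%M)"
zfs snapshot -r "$SNAP"
zfs send -R "$SNAP" | ssh nas2 zfs recv -F tank/backup/routervm
```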

What this gives me is a fully local, redundant infrastructure that can temporarily run on one single server in an emergency, and still draws less than 120W at idle.

… works for me … it is risky (as in running-a-virtualized-firewall-on-your-NAS risky) and requires some care not to break stuff, and I am the only one able to ‘fix’ it when it breaks, but hey, it keeps my brain engaged and doesn’t draw too much power …

1 Like

Lots of good input. Thanks to every one of you. Got to narrow things down by a good margin. About 3.5" drives… any recommendations? It has been a while since I bought drives, as I was mostly inheriting scrap (resulting in a museum-like patchwork I want to replace asap).

I was thinking about the Toshiba MG07 or MG08 for the price. 14/16TB drives are around 250€ here, and with four drives I save a full cold spare’s worth of money over e.g. IronWolf Pro NAS drives. Everything is mirrored in the zpool. I’m not a hoarder, but I’ve got several TB of data that need redundancy. And I don’t like the resilvering process (and the possible praying), especially at modern HDD capacities.

CPU-wise I’m probably getting a 5600X and will see how low-power I can go without limiting performance too much, while still having a performance option later in its lifetime if need be.

I did check out the TrueNAS Mini series, but I’d rather build the machine myself, from the first screw to the last shell script. The series looks really nice though, and the price is reasonable considering the alternatives on the market.

And a link from one of you guys led me to an ASRock board retailer that is in Germany. That board is quite a rarity here.

I did check out that video with the very promising RM550x PSU; sadly the case needs an SFX(-L) form factor, so I’ll probably grab some 500W 80+ Gold one. I’ve seen two with 90% efficiency at 20% load that look promising. Power is important, but saving 5W for 60€ extra just isn’t worth it.

3 Likes

Short update on my side… I was finally able to get the board ordered, although delivery is approximately a week out according to the vendor. The X570D4U-2L2T is quite a rarity in Germany as of today.

I settled on a Ryzen 5900X instead of the 5600X. I found out that I have more load to expect, and the 5900X can be throttled down to ~65W. Eager to find out how the onboard BMC/IPMI likes my ideas.

For cooling I went with a 240mm AIO (it should fit into the CS381 according to SilverStone’s specs) and 120mm case fans. That should bring down general case temperatures and benefit the HDDs and SSDs as well as the chipset/controller.

4x Toshiba 16TB MG08 drives form the backbone for all my storage in the years to come, with the option to expand to 8x HDD. I’ll monitor performance closely for the first while and then decide whether a SLOG or L2ARC is all that useful. Got 128GB total memory (Kingston 3200 ECC), so the ARC should get a good hit rate, although I doubt I’ll have more than 64GB spare for ARC once everything is running. We’ll see; the M.2 slots aren’t going anywhere any time soon.
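To decide on SLOG/L2ARC later, I’ll just watch the ARC and pool statistics for a while, along these lines (arc_summary and arcstat ship with OpenZFS on Linux; the pool name is a placeholder):

```
# Overall ARC statistics, including hit ratio:
arc_summary

# Live ARC stats every 5 seconds:
arcstat 5

# Per-vdev I/O to judge whether a separate log device would help:
zpool iostat -v tank 5
```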

After considering Proxmox, TrueNAS, unRAID and ESXi, I’m basically sold on Proxmox and already got all the ISOs I need. I don’t want things to be more complicated than they have to be, so no ZFS on unRAID, no TrueNAS ZFS pool in a Proxmox VM, or other adventures (for now).

Waiting (hopefully) not longer than a week. I’ll get you some pictures of the build once the packages arrive.

edit: Gotta check my basement for a VGA cable. Forgot about that; HDMI is only available with an APU. Time to check my cable box, which should also include some old coaxial LAN cables with BNC connectors (10Base2, those were the days!)

For power efficiency you don’t really want water cooling. A large Noctua will do fine and use less energy.

What motherboard did you pick? Some brands just use less power than others, which can make quite a difference. Getting an efficient power supply (one that is actually efficient at low load) is key. A B550 is more efficient than an X570, because the X570 chipset just uses more power.

I have a simple server with 3 hard drives, an Intel 6600K on an ITX motherboard with 16GB RAM and an NVMe SSD. It uses about 33W, and I don’t even spin down my hard drives.

Most important is idle power use, as most of the time the cores won’t be loaded. Intel is still more efficient here.

2 Likes

A large Noctua won’t fit into the case; the only appropriate Noctua is the NH-L9a. And considering the bad 120mm fans that come with the case, I had to spend 40€ on two proper fans on top of that. The AIO is actually cheaper, all things considered. The pump draws a bit more power, sure.

The board is the X570D4U-2L2T. Really a well-designed board with server features on AM4; no real alternatives. Not really fond of the 10GbE power bill, but a 1G storage server doesn’t cut it anymore.

It’s really more of an all-in-one home server approach than a DIY NAS, and I’m OK with raising the power limit a bit to meet the demands. I realized many more possibilities while putting my order together. I’ll still tweak the hell out of the components to get low noise and low power for my use case. But more memory sticks, a pump, a bigger CPU… they certainly add up.

Got an SFX-L 80+ Gold PSU with 500W and 90-93% efficiency at around 100W, so I won’t lose unnecessary power to a cheap PSU. SFX PSU offerings are rather limited, and getting higher efficiency just wasn’t worth the price tag.

If you are using IPMI you don’t need a video cable at all; just remote in from a laptop …
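Even without the web KVM, the basics work from any shell via ipmitool (sketch only; the BMC address and credentials are placeholders):

```
# Power state and remote power cycling through the BMC:
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power cycle

# Serial-over-LAN console as an alternative to the KVM applet:
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret sol activate
```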

2 Likes

I’m always underestimating the capabilities of the BMC/IPMI. Too much brainwashing by desktop components in the past. Really eager to see if all goes well with pulling an ISO right through the IPMI KVM.

Got another issue:

I bought two of these: SilverStone CPS05-RE

Is this enough to connect my SATA drives to the motherboard, or do I need an HBA? I’ve never used a backplane before, and I’m not sure if I need something more for the board to detect the HDDs.

Is the whole hotplug thing the job of the motherboard (BIOS setting; the board has 8x SATA ports), or do the backplane and SATA just run hotplug out of the box? I don’t want any shorts in my system, if that is even a thing that can happen.
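What I plan to do as a sanity check after plugging in a drive (assuming Linux is already running on the box):

```
# Watch the kernel log live while inserting the drive:
dmesg --follow

# Then confirm the new disk shows up:
lsblk -o NAME,SIZE,MODEL,SERIAL
```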

The shipment arrived!

The power draw will be lower than I previously planned. The 4x Toshiba MG08 I got were OEM drives without warranty (the first thing I always check on the manufacturer’s website). Getting my money back, of course.

But for the first week or two the system will run without HDDs. At least I get to build the whole thing, and thanks to the hot-swap bays, the proper drives will slot in smoothly later. Didn’t expect to replace HDDs THAT soon :slight_smile:

Weekend will be glorious! I never built a ship in a bottle, but this will be the closest I’ll ever get to that experience.

2 Likes

Greetings Exard3k,
I am also attempting to build quite an identical setup to yours.
I would love to hear more about your experience with power consumption and the settings you have made to maybe achieve a lower idle wattage draw.
Also, I feel your pain buying that motherboard. I am also from Germany and it was a nightmare to find a reseller, but I finally found one and hopefully it will arrive in 14-16 weeks …
Best regards

1 Like

I’m running at ~75W idle and 125W full load with 128GB memory, 6x HDD, 4x SSD and 6 active Ethernet connections (2x 10Gbit). The 5900X is set to 65W TDP via ECO mode; plenty of horsepower with 12 Ryzen cores.

It’s not super low-power, but considering the amount of stuff I’m running, I feel comfortable running it 24/7. Besides ECO mode, I only tweaked some CPU governor things in Linux and disabled all unnecessary stuff in the BIOS. I let my HDDs spin all the time, so that’s a power factor that can’t be mitigated when using ZFS, and it adds up. SSDs are vastly more energy efficient and can change power states in milliseconds without mechanical strain.
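The governor tweak is nothing fancy; roughly this (assuming the cpupower utility is installed, otherwise sysfs works too; run as root):

```
# Check the current governor:
cpupower frequency-info | grep -i governor

# Switch all cores to the power-saving governor:
cpupower frequency-set -g powersave

# The sysfs alternative:
echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```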

I bought the board last year when availability was more or less still fine. The passive chipset heatsink needed a 40mm Noctua upgrade, and the ECC memory throws errors beyond 2667 MT/s. Otherwise I’m happy with everything, and I’m migrating from Proxmox to TrueNAS SCALE to merge hypervisor and storage server and ease the management effort.

Cost me a bit, but it will probably run another 8 years or so. A carefree package for anything home server/lab.

1 Like

Thanks for the fast reply,
those power draw numbers still sound way better than my Threadripper 3000 setup.
Albeit I run a decent number of containers in Proxmox and the upgradability feels like bliss, it’s sadly still not enough to justify the whole setup and its power draw.
Thank you for your insight. Maybe once mine and the one I am also building for a friend are running, I will publish my results as well.

I’m betting just setting things to power-efficient modes in the BIOS and disabling boost etc. would make lots of parts consume little enough power.

Indeed, and I tried looking into it.
Sadly, the Threadripper Pro 3955WX either really doesn’t allow any change of power-management settings, or I haven’t found them. Well, at the time I built that system I still had other demands which normal Ryzen CPUs couldn’t fulfill; things have changed, so why shouldn’t the system, and maybe in time I will switch back to the Threadripper.
Also, one system is for a friend, and I couldn’t wholeheartedly recommend such high-priced items to him.
I know there would be cheaper options for a motherboard, but this one really has a nice feature set which he also wanted and would otherwise have had to add through other means.

I wonder what the return on investment is for a fancy new system, versus the money an older one would cost you in extra power.