Home Server

I’m currently looking into building a home server. I plan to use it for backups and data storage (i.e. as a NAS), and as a playground for projects such as a smart-home control center.
So it’ll probably be running a few docker containers at a time: a handful of tiny webservers, small databases, an MQTT broker, and an instance of Node-RED. Sounds like a lot, but there will probably be little to no traffic as it’s just for me.

I wanted to ask what hardware you would recommend for this type of workload. Since I am still finishing my postgraduate degree I don’t want to spend a ton of money, so 500-600 EUR is probably my limit.

Another consideration is power usage, as electricity is relatively expensive where I live at 20 cents/kWh. So an idle power consumption of ~50 watts running 24/7 is not really acceptable for me, which rules out my previous idea of using cheap first or second gen Ryzen chips.

Another idea was to go for the fully decked-out Raspi 4 with 4GB of RAM, as that would probably be sufficient for my computing needs and power consumption would be low, and to use the money saved for a dedicated NAS to cover the backup and data-storage portion. I’m not sure if ARM would be too limiting, especially when it comes to docker containers, as compatibility isn’t great.

Any ideas, experiences or suggestions would be welcome!

EDIT: I was thinking of sticking with the Ryzen 3 route, but undervolting it and/or locking it in lower P-states to prevent it from consuming too much power. I don’t have any experience with how much of a difference that makes in power consumption, though, or how strongly it affects performance.

My suggestion would be a cheaper Ryzen setup running either Unraid or FreeNAS, probably Unraid. You can get a decent deal on an old 2600 and build out an OK system for 350-400.

Do you already have disks?

No, I don’t have disks, but I don’t have massive amounts of data, so a total of 4-6TB would be plenty; that part shouldn’t get terribly expensive.

A good place to start would be to look at what kind of deals there are on the used market for Intel stuff. Given that most AMD CPUs don’t have an iGPU, you’d have to at least pop a GPU in there to get it to boot.

I’ll put together something for a Ryzen build. Do you have a spare GPU laying around? It doesn’t need to do much.

Ah, didn’t see the ninja edit.

This isn’t a bad way to go either.

I don’t think this will be necessary, but there is a known P-state bug with Linux, so you might run into issues with power states. The 3200G is already a low-power chip anyway though… even at your 50 W, 24/7, 365 days @ 0.20/kWh, that’s 87.6 yuros per annum, or 7.30 dollarydoos per month.
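For anyone who wants to plug in their own numbers, the arithmetic is just watts × hours × rate. A quick sketch (the 50 W and 20 cents/kWh figures are the ones from this thread; swap in your own):

```python
def annual_cost(watts, eur_per_kwh, hours=24 * 365):
    """Cost of running a constant load for the given number of hours."""
    kwh = watts / 1000 * hours  # energy drawn over the period, in kWh
    return kwh * eur_per_kwh

yearly = annual_cost(50, 0.20)  # 50 W idle at 20 cents/kWh
print(round(yearly, 2))         # 87.6  (EUR per year)
print(round(yearly / 12, 2))    # 7.3   (EUR per month)
```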

50 W is probably high for this build, and I would expect to see that only under load.

I don’t think undervolting is strictly necessary. You could disable PBO and any turboing to reduce clocks somewhat. That’s where the highest power states really are, because the voltage on these chips is not linear.
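Boost/PBO is usually toggled in the BIOS, but on Linux you can also see what the cpufreq subsystem is doing at runtime. A small sketch that reads each core’s scaling governor, assuming the standard Linux cpufreq sysfs layout (the `sysfs_root` parameter is only there so the function can be pointed at a test directory):

```python
from pathlib import Path

def cpu_governors(sysfs_root="/sys/devices/system/cpu"):
    """Map each CPU (e.g. 'cpu0') to its current cpufreq scaling governor."""
    governors = {}
    for gov_file in sorted(Path(sysfs_root).glob("cpu[0-9]*/cpufreq/scaling_governor")):
        cpu = gov_file.parent.parent.name  # .../cpu0/cpufreq/scaling_governor -> "cpu0"
        governors[cpu] = gov_file.read_text().strip()
    return governors
```

With the acpi-cpufreq driver that these AMD chips typically use, writing `powersave` into those files as root (or running `cpupower frequency-set -g powersave`) holds the cores at the low end of the frequency range.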

Here’s what I came up with


Could probably get it lower, but this offers you a few options for upgrades down the road with more disk bays. You could also just run 2 drives and mirror them. There’s a ton of ways to go about this.

There’s a great selection of ARM based single board computers out there now which accept SATA or NVMe drives. The power usage is pennies compared to even the lowest powered desktop.

The Pi is at the very low end of the SBC spectrum.
I really enjoy my Odroid HC1. Plenty of processing for docker containers and a handy heatsink type case that also holds a 2.5" SSD.

Another idea was to go for the fully decked-out Raspi4

There’s a great selection of ARM based single board computers

So it’ll probably be running a few docker containers at a time

The only gotcha for ARM (or anything not x86_64, actually) is that a fair number of Docker containers don’t have ARM builds. You can always build them yourself if the Dockerfile is provided, but…

There are also a lot of bad Docker containers. Many Dockerfiles just remote fetch an x86_64 binary.
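Before committing to ARM, you can check whether a given image actually publishes an ARM build: `docker manifest inspect <image>` prints the manifest list, and each entry in it carries a platform field. A sketch of the check, operating on the already-fetched JSON (the structure follows the Docker/OCI manifest-list format):

```python
def supports_arch(manifest_list, arch="arm64", os_name="linux"):
    """True if a Docker/OCI manifest list contains a build for the given platform."""
    for entry in manifest_list.get("manifests", []):
        platform = entry.get("platform", {})
        if platform.get("os") == os_name and platform.get("architecture") == arch:
            return True
    return False
```

Note that single-arch images return a plain manifest with no `manifests` key, which this treats as no match, so a `False` here means "check the image page by hand", not necessarily "no ARM build exists".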

I like unraid on x86 for a NAS/Homelab solution because I can one-click install docker containers like the degenerate I am.

Some thoughts to explore:

  • How valuable is your data? If it is a single point of failure, then having your NAS and your sandbox in one host is a higher risk than separating them. Especially webservers.

  • Are you constrained for physical space? If so, then a compact NAS unit plus an RPi / mini PC is an option; otherwise get some cheap used server hardware and save the money for later electricity bills.

  • Will your containers be accessing data elsewhere on the network, i.e. the databases connecting to other hosts? If so, you will want to plan your networking setup in advance, and be careful not to create bottlenecks / security holes.

At the end of the day your use case may evolve over time so get hardware that gives you flexibility, isn’t too expensive if you break something, works with the rest of your setup and is fun to play with.

It sounds like an Unraid setup would suit you, but if that eats 25% of your budget in licensing, consider putting your data on a dedicated NAS (FreeNAS or an appliance), then set up your lab on $device or $server and try ESXi, Proxmox or just plain Linux to run everything else.

Edit: typos

Yeah, that’s the only thing holding me back from that option right now. I had some frustrating experiences when I tried to get a semi-large docker-compose stack running on ARM, and even though the images all had ARM builds, lots just refused to work or behaved unexpectedly.

My data isn’t super valuable; mostly it’s just backups and a steamcache. For important things I pack them in an encrypted container and just upload it to GDrive.

Physical space isn’t too bad of an issue. I could fit a regular-sized mid-tower case, but a server blade is too odd-shaped to work for me. The main physical constraint is noise, as it would sit in my “network cabinet”, which is next to the bedroom, making a used server most likely too loud.

And yes data would be accessed from elsewhere, but mostly in the local network which is already wired with 10Gbit, so performance isn’t an issue, and the network is reasonably segregated with VLANs.

Unraid sounds interesting, but I don’t think I need many of its features. Virtualization would be a nice-to-have and I guess I could set it up with ESXi, but for my needs it’s not a necessity and the pseudo-virtualization of docker is sufficient.

Unraid sounds interesting, but I don’t think I need many of its features. Virtualization would be a nice-to-have and I guess I could set it up with ESXi, but

Linux has built-in virtualization in the form of KVM. It’s supported on every major distro. Any one of them will be fine for that.

Learn the real admin way. Go FreeBSD and pick up a cheap second-gen Ryzen or first-gen TR.

Here are the differences between Unraid and BSD. The first is easy mode and the latter is the master race.


Basically everything it does is just stuff Linux already does. If you wanted to run a setup similar to that, it would just be a matter of some elbow grease to roll your own. The advantage here is ease of use. It’s not something you really think about unless you’ve tried it… though they do let you test drive it for a month before committing to buy.

There’s also Proxmox, but I haven’t really tried it.

To me ease of use is not a priority, and neither is virtualization. I just plan to use it for running my small projects. When I do virtualization it’s usually larger scale for my research projects, and in that case I use my university’s server infrastructure :wink:
My main concern is the tradeoff between power-consumption and performance.
SBCs with the Intel Celeron J4105 (https://ark.intel.com/content/www/us/en/ark/products/128989/intel-celeron-j4105-processor-4m-cache-up-to-2-50-ghz.html) are also interesting, as they are reasonably priced, have low power consumption, and offer the benefit of x86 compared to the Raspi or Odroid.

Ease of use wasn’t my priority either until I got a full-time job and started concerning myself with how much my time is worth. If it’s not, then something like Ubuntu Server 19.10 with ZFS on root might be worth getting into.

Those x86 SBCs are cool so long as you realize you’re limited in expandability down the road. I think the money you save on the power bill will eventually be spent on more hardware. I started with a 2-bay Synology and now my habit has turned into running both Unraid and FreeNAS on 2 separate boxes in my closet. My point being, a lot of us started small and grew into much bigger systems.

By installing Solaris Unix?


don’t feed the troll


You had to interrupt him. Haven’t you ever read Sun Tzu?

There’s nothing wrong with having @Jaxo appreciate the easy way by going the hard way first haha

I am starting to appreciate that sentiment more and more. I’ve started working part-time, and since then I’ve been willing to shell out more and more for just not having to deal with stuff haha.

The reason I’m not super concerned with ease-of-use is that I already have experience with configuration automation with Ansible through work, and could probably set that thing up in 2-3 hours just modifying some playbooks I already have lying around anyways.

The Synology DS918+ was also something I had on my radar; I think Level1 even did a video about it at some point. The biggest plus there is that it’ll probably “just work”, and even if it’s not powerful enough down the road it will remain a more than competent NAS.
