Incoming rant, autism kicked in. It may be slightly off-topic (I should probably start a blog or something).
When reading that, I immediately thought of this:
I know the pains of self-hosting and I understood them when I got into this stuff. I was lucky enough to not spend too much money on hardware. Heck, I got a free server and tons of HDDs from work because they were old and junky (and starting to fail) and had to be decommissioned. I also bought some weak PCs and found a new purpose for them. But honestly, I didn't know what to expect of the cost of running all my servers. My power bill went up somewhere between double and triple.
But that's not saying much, because I wasn't using a lot of power to begin with. My most power-hungry server has a meager Xeon X3450, cooled by an Intel stock cooler (the older, taller kind).
I still want to be in control of my data and don't want to spend a lot of money on either hardware or electricity. I'm a power-efficiency fanatic, not because I'm a tree-hugger (far from it), but because I don't want to increase my cost of living; I want to save money and avoid debt. To be honest, going with a VPS seems like the saner choice, as long as you secure your data really well. You still give up some control, but depending on where you live it should be cheaper, and it saves you the time of hardware maintenance, leaving you to administer only the software side. I may try a VPS just to hide my home services behind another gateway.
But since I can't create things out of nothing, something has to give, and I'm willing to spend a little money on low-power devices and run them as servers. Note to self: when faced with the desire for something new and shiny, get the cheapest thing you can get away with and do a redundant setup. I spent quite a bit on ok-ish hardware (not exorbitant amounts), but then had to invest a lot in upgrades to turn it into servers. Now I want to go in the opposite direction: micro-servers.
Modern hardware (anything made in the past 10 years) appears to me to use more power than it should. We don't have cheap ARM on the desktop yet (for everything else, there is the Honeycomb LX2K), but we have some alternatives, namely single-board computers and Intel Atom-based motherboards. The servers I'm most proud of are my pfSense box and my ex-main PC, both running on ASRock J3455M motherboards. In the future, I'll look into boards like these and *Pis, and clustering them. From my tests, LXD seems pretty fun and more hardware-efficient than Proxmox (well, duh, containers vs virtualization, it's a no-brainer if you can get away with it). Since I don't need 99.999% availability, I'm OK with things failing as long as the data is safe and the service pops right back up from the latest snapshot. For that, I will probably need something like Ceph, which I have zero experience with, to go along with an LXD cluster with configured failover.
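To make the snapshot idea concrete, here's a rough sketch of the "pop right back up from the latest snapshot" approach using the pylxd Python client (the plain `lxc snapshot` / `lxc restore` commands do the same thing). The container name and snapshot prefix are just placeholders, not my actual setup, and it assumes the API returns snapshots in creation order:

```python
# Rough sketch only: periodic snapshots plus "restore the newest one" recovery,
# via the pylxd client. Names below are placeholders for illustration.
from pylxd import Client

client = Client()  # talks to the local LXD daemon over its unix socket

def snapshot_all(prefix="auto"):
    """Snapshot every container; meant to run from a cron job / systemd timer."""
    for c in client.containers.all():
        snap_name = f"{prefix}-{len(c.snapshots.all())}"
        c.snapshots.create(snap_name, stateful=False, wait=True)

def restore_latest(name):
    """After a failure, roll a container back to its most recent snapshot."""
    c = client.containers.get(name)
    snaps = c.snapshots.all()  # assuming creation order here
    if not snaps:
        return
    if c.status == "Running":
        c.stop(wait=True)
    snaps[-1].restore(wait=True)
    c.start(wait=True)

if __name__ == "__main__":
    snapshot_all()
    # restore_latest("nextcloud")  # hypothetical recovery call
```

In a proper cluster the container storage would live on something shared like Ceph, so another member could bring the service back instead of the failed one.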
In my home lab (soon to be home data center), I currently have only one main storage and virtualization server (the above-mentioned Xeon), with a few HDDs left over to build a second array (probably RAID 10), which I will use for replication of my most important VMs. But as mentioned, in my next home lab build (before this one becomes unusable), I plan to have even lower-powered stuff with RAID 1 at most, but multiple storage devices. I wish there were some generic small 2-bay NASes like the Zyxel NSA-320 that run run-of-the-mill Linux (heck, I'd even use Ubuntu if it meant continued updates; I don't need to do much on a storage server anyway). I'm thinking of getting some Pis and USB storage and building an LXD cluster and SAN that way. I may end up running some Docker containers inside LXD, but I really like the power of LXD (having control over the OS and reconfiguring services on the fly).
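On the "Docker inside LXD" point: that works as long as the LXD container has nesting enabled. A small sketch, again with pylxd; the container name "apps" is made up, and `lxc config set apps security.nesting true` is the CLI equivalent:

```python
# Sketch: allow a nested container runtime (e.g. Docker) inside an existing
# LXD container. "apps" is a placeholder name.
from pylxd import Client

client = Client()
apps = client.containers.get("apps")
apps.config["security.nesting"] = "true"  # let Docker set up its own namespaces
apps.save(wait=True)
if apps.status == "Running":
    apps.restart(wait=True)  # restart so the new config takes effect
```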
With low-power SBCs, the power needed to run the storage is almost negligible (aside from the storage devices themselves), you get pretty much silent operation, and there are no worries about dust clogging up airflow paths. On the compute side, being low-powered may not be desirable if you expect lots of clients, but the vast majority of home lab setups (90%+, though I pulled that number out of my ass) don't have more than a few dozen clients connected to a single service concurrently. Modern SBCs (from the RPi 2 onward) should be more than enough to host decentralized services at home; what we need is better storage, since SD cards are just awful. I'm really looking forward to things like the Rock Pi 4 or Pine64's RockPro64 with M.2 adapters. Just add humongous QLC M.2 SSDs (they don't need to be fast, just faster than SD cards), build a SAN with them, then bring in other SBCs or even USFF computers for the compute side and you're done. Most modern SBCs can boot via PXE, but in the absence of that (or if you just want more decentralization, i.e. no central controllers), you can slap old 2 GB microSD cards in them and they won't complain. And you get to recycle old stuff (you can almost get it for free nowadays). Alpine Linux runs fantastically on these things.
My current home lab is pretty silent, except for an annoying HP ProCurve 48-port switch. That thing is not only loud, but its fans are whiny / high-pitched, which makes them really noticeable. The Xeon is in an old Antec case, but the rest of the PCs sit in 2U rack-mountable cases in my LackRack. They have 80 mm fans and are as quiet as the tower case (I use low-RPM fans).
I wanted to say that if you want silence, just get a high-airflow case with big, low-RPM fans, but I see you already installed Noctua fans in your server. It should be fairly quiet.
The conclusion of this rant is that anyone interested in self-hosting should look into decentralizing their infrastructure with low-powered computers. A home lab can be one old power-hog server with everything virtualized on a single device, but when self-hosting you may want some redundancy in place and, for 24/7 operation, power efficiency.
I must say that I am really thankful for the work people do on free software. I used to be dependent on one CPU architecture and one OS, but ever since I got into free software, I've really enjoyed the freedom of being able to jump between architectures. I wish I could spread that freedom, but the only way I can is by spreading knowledge, which is probably why I'm obsessed with public knowledge bases / wikis. One of the things I will host alongside my blog will be how-to articles, which I may also add to the L1 forum. But before that, I have to actually self-host and note the steps I follow, to make it easier for others to replicate (where automation isn't possible).