The Ultimate Home Server - Toward an Amalgamation

6 cores is more than enough for a good chunk of Docker services. I have an old 4690k (4 cores, 4 threads) and it can easily run the 30 services I use. Most of them are lightweight, though — DNS resolvers, databases, and the like.

If you’re up to the task, you can experiment with K8s across both machines and have them handle different tasks.


One would hope that, after enough traction, we could just provide a pluggable auth module for the open source projects that are most popular with our community.

https://sandstorm.io/ does SSO among its hosted apps, so it may be worth a look.


On another note, I wonder how I should set up my storage; probably the most important question at the moment. The secondary machine (A10 7860K) will only need an SSD for the OS (a 480 GB SATA SSD). The OS drive for my main machine is a 256 GB NVMe SSD, alongside all the other storage drives (excluding backup, which is external but related).

For starters, I have an additional 1 TB NVMe SSD in my computer (via a PCIe slot). I also have two additional 4 TB HDDs ordered, which means I’ll have 4x4 TB HDDs and 3x2 TB HDDs (one of the 2 TB drives might go unused for now, though). That gives me a total of 10 TB, not counting the NVMe SSD (I also have an additional 1 TB HDD I could use as backup, which is yet another extra drive).

I’ve been considering multiple options, but I’m not sure which makes the most sense. One question is whether to use the 1 TB SSD as a giant cache or solely for the virtual machines that run on the main machine.

Another is whether to merge the storage drives into one giant 10 TB pool or keep them separate. Is there a way to merge the drives such that I can recover data if one drive fails, without taking the others down with it, and still add drives to the pool later if I need or want more storage? I just have several questions on how I should handle storage.

How you set up your disk space is up to you. It really does depend on what you want to do with it. But, if you want to merge all your drives (or some of them), you’ll need a volume manager. That can either be ZFS or LVM.

ZFS doesn’t really support adding single drives to an existing vdev after the fact (you can replace dead drives, or add whole new vdevs to the pool). Also, the performance overhead might not be worth it, depending on your data.

LVM supports RAID, but you have to set it individually on each logical volume, which may or may not be a good thing.

BTRFS supports RAID too, but it’s less of a volume manager and more of a file system with built-in sub-volumes (similar to ZFS datasets).

Please do read up on all of them, since I’ve never used ZFS hands-on and BTRFS never really fit my use case; I’m just mentioning the solutions I know. I don’t know about BTRFS, but both ZFS and LVM allow you to use a faster disk (your NVMe SSD) as a cache. In LVM, it’s a cache for a single logical volume. ZFS can use it as an L2ARC (a read cache) or as a separate intent log (SLOG) to speed up synchronous writes. I don’t know much about these, so search/ask around.
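To make the options above concrete, here’s a rough sketch of what pooling the four 4 TB drives with an NVMe cache could look like under each tool. The device names are placeholders (check `lsblk` for yours), the sizes are guesses, and this is not a recommendation for a specific layout — just the shape of the commands:

```shell
# ZFS: one raidz1 pool out of the four 4 TB drives,
# then an NVMe partition attached as L2ARC read cache.
zpool create tank raidz1 sdb sdc sdd sde
zpool add tank cache nvme0n1p1

# LVM equivalent: put the drives and the NVMe partition in one
# volume group, carve a data volume out of the HDDs only, then
# attach an NVMe cache volume to it.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/nvme0n1p1
vgcreate tank /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/nvme0n1p1
lvcreate -l 100%FREE -n data tank /dev/sdb /dev/sdc /dev/sdd /dev/sde
lvcreate -L 200G -n datacache tank /dev/nvme0n1p1
lvconvert --type cache --cachevol tank/datacache tank/data
```

Note the trade-off mentioned above: the ZFS pool can’t grow one disk at a time, while the LVM volume group can take new PVs later with `vgextend`.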

Y’all need to remember to secure your shit.


Seems like quite an undertaking to take all these measures. Still, wouldn’t hurt to do at least some of them.

I’ve done everything except for the VLAN stuff. Guess it’s time to wrap my head around that.


Is there a huge benefit to an SSD cache for me if I also have to set up a similar backup system? It would mean not only getting another 1 TB SSD (or even a matching one) but also copying the storage setup onto the backup drives, should I merge more drives.

The cache is really only needed on the main storage pool, where it aids read/write speed. I suppose you could add one to the backup pool, but since that pool will only really handle writes, it doesn’t make a lot of sense. Most of the time, the backup happens automatically at night, and at that point an extra 10 minutes of copy time makes very little difference.

The way I understand it, an SSD cache is useful if you work directly off of the pool. Of course, you can also use the NVMe as a separate volume, for your most important stuff. That’s up to you.

I guess the 1 TB SSD is overkill as a cache, though I don’t have many other places to put it. Unless I use the 1 TB SSD for the OS (which, again, seems like overkill) and the 256 GB SSD as the cache. The VMs were going to be stored in the pool. Or I could put the VMs on the OS drive and back them up to the storage pool; I could store a snapshot of the main OS on the pool as well.

I’m also going to make a backup of the storage pool, but first I need to buy a multi-bay hard drive enclosure, and I’m not sure whether to get 4, 5 or 8 bays. Currently I’d only need 3 bays, but I also want to expand capacity in the future (which kind of puts ZFS out of the question). If I limit myself to 4 bays, I could still replace the small 2 TB HDD if need be.

Edit: Okay, while writing this I decided that I’m going to use the 1 TB SSD for the OS and VMs, use the 256 GB SSD as the cache (since that SSD is better quality anyway), and keep some VM backups in the storage pool. How would I set up a backup solution?

Backups are always good to have, but I have no idea what the best solution is. The only thing I know is: if you don’t have a lot of space, there’s no reason to back up stuff that’s replaceable.
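For what it’s worth, one simple starting point is rsync with hard-linked snapshots, so each night only changed files consume extra space. This is a hedged sketch, not a turnkey solution — the paths are placeholders for your pool and backup mounts, and you’d run it from cron:

```shell
# Nightly snapshot backup sketch (adjust SRC/DEST to your mounts).
SRC=/tank/data
DEST=/mnt/backup
TODAY=$(date +%F)

# Copy from the pool; files unchanged since the last snapshot are
# hard-linked against it instead of copied again.
rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$TODAY/"

# Point "latest" at the snapshot we just made.
ln -sfn "$TODAY" "$DEST/latest"
```

Each dated directory under `$DEST` then looks like a full copy, but shares disk space with its neighbours.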

In Wendell’s video that introduced this, there was a focus on making all this easier for normal folks. So, as a person that would love to contribute to this, what are we missing? The tools are largely already out there and for me are pretty easy to set up but I’m not a normal user, so I would appreciate feedback on where to spend my time.


I’ve started going into this from Windows, never having used Linux.

I’ve done everything in Docker. Once I got the containers running correctly with Docker Compose files, setting things up in their respective GUIs was pretty easy. But I could have done it more easily with something like Unraid.

If you start with something like Unraid or FreeNAS, the next hardest thing is a secure setup of all the software. It took me some time to get everything running behind an nginx reverse proxy, even with the GUI of Nginx Proxy Manager. Then I set up Authelia for authentication, because having 10 different passwords got annoying real quick.
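For reference, the Compose skeleton I mean looks something like this. It’s a hedged sketch, not my exact files — the image names and volume paths are the commonly published ones for these two projects, and Authelia still needs its own configuration file before it will start:

```yaml
version: "3"
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"      # proxied HTTP
      - "443:443"    # proxied HTTPS
      - "81:81"      # admin GUI
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt
  authelia:
    image: authelia/authelia:latest
    volumes:
      - ./authelia:/config   # expects configuration.yml in here
```

Both services end up on the same default Compose network, which is what lets the proxy reach Authelia by container name.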

After that, I had to troubleshoot why the reverse proxy didn’t work with some applications, mostly because of proxy header stuff. In the case of Home Assistant it was just a config setting to allow reverse proxies -_-.
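For anyone hitting the same wall with Home Assistant: the setting in question lives in `configuration.yaml`. The subnet below is an example — use the network your proxy container actually sits on:

```yaml
http:
  use_x_forwarded_for: true
  trusted_proxies:
    - 172.18.0.0/16   # the Docker network of the reverse proxy
```

Without it, Home Assistant rejects requests that arrive with forwarding headers from an address it doesn’t trust.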

It would be nice to have something that can automate the setup, but also tell you what it is doing exactly.

For actual normal folks, you would need an interface that does everything for you, like the Unraid setup, but then generates a complete setup/config for whatever applications you use and updates it automatically. For example, if you set up Nextcloud and Home Assistant, they both automatically run behind a reverse proxy and each gets its own subdirectory on the server (IP address/{app}).
Something else that would be nice is a backup solution that also tests the backup and sends you an e-mail if the test failed (so you don’t lose data). I’m not sure something like that already exists, though.

In the sense of one single repository for your data, it would be cool if there was an Elasticsearch-style client (like Apple’s Spotlight) connected to multiple applications’ APIs, so if you search for something, it can give results from Plex, Nextcloud, Calibre or your notes.


I’m a nermal… I have very little understanding.

Hit me up after Wednesday and I will put in a few hours trying to understand what you tell me to do. I’m just hitting the sack now; it’s 11 pm Mountain Standard Time. I’m available 10 am to 10 pm Thursday for 2 or 3 hours, and can set up something recurring if needed.

This is tricky, because in a server setup there’s a lot of “well, it depends”. For example, there’s a variety of ways to add TLS with Let’s Encrypt, and the right one depends a lot on how you want to do it and what trade-offs you’re willing to make regarding reliance on third-party services (Cloudflare, etc.) to run your home server. Really, the only way this would work is an opinionated management/orchestration layer that would work for most people but would leave many excluded.
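To illustrate the “it depends” with Let’s Encrypt specifically, here are two of the usual routes sketched with certbot. The domain and credentials path are placeholders, and neither route is being recommended over the other:

```shell
# Route 1: HTTP challenge. Simple, no third parties beyond Let's
# Encrypt itself, but port 80 must be reachable from the internet.
certbot --nginx -d home.example.com

# Route 2: DNS challenge via a DNS provider's API (Cloudflare here).
# Works without exposing anything inbound -- at the cost of tying
# your setup to that provider and storing an API token on the box.
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d home.example.com
```

That one decision already forks the “automated setup” into two quite different configurations, which is exactly the problem.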

I don’t think more documentation, guides, etc. are the way to solve this problem. I think it needs to be some management software that allows people to spin up NextCloud, HomeAssistant, Plex, etc. without needing to know much about how it’s all working.

Given that, what features would you like to see in such a tool?


Setting up individual apps is usually not that big of an issue; things like inter-app communication are a little harder. I’ve set up Nginx Proxy Manager, and through the GUI it was not that hard, although I did run into other bugs.

What would have helped me is a script that builds a Docker Compose setup, makes the correct folders and gives you that initial setup, with some configs pre-set, but still with some insight into what the system is doing (figuring out how Docker Compose works was a huge boon for me).

To give an example for my home system:

  • User input: select folder
  • User input: select services
  • Make Docker Compose files
  • Generate needed passwords and pass them in a secrets file
  • For the Docker Compose files:
      • Set IP addresses so there is no conflict
      • Make sure nginx is in the same network as the proxied services
      • Set up nginx with the right proxy headers for each service, with Authelia?
  • Set up centralized logging (I still haven’t done this :frowning:)
  • Set up a simple Homer or Heimdall page to get to each service. Or maybe an uptime page?
  • Set up Grafana/Telegraf/InfluxDB so they work together
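To make the first few steps concrete, here’s a hypothetical sketch of such a script — just the folder layout and secret generation. The service names, the `./homelab` layout and the secrets format are all made up for illustration, not an existing tool:

```shell
#!/bin/sh
# Sketch: make per-service folders and one random secret each,
# for a docker-compose "secrets:" section to reference.
set -eu

BASE="${BASE:-./homelab}"
SERVICES="nginx-proxy-manager authelia nextcloud"   # pretend user selection

mkdir -p "$BASE/secrets"
for svc in $SERVICES; do
  mkdir -p "$BASE/$svc"
  # 16 random bytes, hex-encoded: a 32-character password per service
  head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n' \
    > "$BASE/secrets/${svc}_password"
done
echo "Generated secrets for: $SERVICES"
```

The real work — writing the Compose files and proxy config — would build on top of this, which is where the one-size-fits-all problem starts.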

Now that I’m writing this out, it seems to be either one-size-fits-all or incredibly complicated. Hmm.

EDIT: Now that I’m thinking about it, it would have been nice for me if there was a premade Ubuntu Server distribution with a large set of services + folders already set up, so you only have to start the services you want to use and can then customize to your liking. (Randomizing passwords is important, though…)

Hi there! :wave: I’ve followed Level1Techs for a while now, but the video announcing the quest for the ultimate home server inspired me to join this forum. :slight_smile:

I miss the homemade 90s web of yore, where ISPs would include a bit of web hosting for customers to FTP up some HTML, CSS and JavaScript they copy-pasted from each other. In that era, any web surfer :surfing_man: had a decent chance of stumbling upon some web-authoring literacy.

Then things took a turn towards centralization with various cloud services, because, when it comes down to it, running your own services is a hassle. And it’s hard to compete with free as in free beer.

Here are some talks that I feel capture the spirit of the “old web” and the shift that happened.

These days I’m pondering:

  • What would it take for my friends to start self-hosting services and what would motivate them to do so?
  • How low can we make the barrier to entry in terms of usability, power usage and maintenance?
  • And finally how can we make it easy for users to start peeking behind the curtain of their systems and start learning about the inner workings of their abstractions?

These are some of the questions I’m bringing with me as I try to set up my homelab. I feel like I’m just starting out. :sweat_smile: I flashed a router with OpenWRT last week and went to make my own network cables to hook up some Raspberry Pis, only to discover that I had bought a punch-down tool and not a crimp tool (!). :joy_cat:

Looking forward to the crimp tool showing up in the mail this week, and to seeing all the posts this Discourse topic will inspire.

Happy homelabbing! :sparkles: :surfing_man:


I have a main server that only comes on when I hit another, smaller server whose only job is to start the main one: a ten-dollar Raspberry Pi. The main server starts up in about 30 seconds. So a 5-watt Pi controls a 1200-watt machine! And it’s simple!
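The post doesn’t say exactly how the Pi triggers the boot, but the usual way to wire something like this up is Wake-on-LAN (enabled in the big machine’s BIOS and NIC). A hedged sketch — the MAC and IP are placeholders, and `wakeonlan` is a separate package to install:

```shell
# Send the magic packet to the main server's NIC.
wakeonlan aa:bb:cc:dd:ee:ff

# Wait until it answers pings (a fresh boot takes ~30 s).
until ping -c1 -W1 192.168.1.10 >/dev/null 2>&1; do
  sleep 2
done
echo "main server is up"
```

Wrap that in a tiny web page or a button script on the Pi and you have the same 5-watt power switch.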


One other advantage of this is that it forces you to make sure your server can power cycle and come back up properly. “Doesn’t work after reboot” is a not uncommon problem.

It also creates regular opportunities to install updates.

The downside is that heat cycling stresses components a bit. It’s less of an issue with SSDs and other solid-state stuff. Then again, even with my always-on server I have the HDDs spin down when not in use, so…