HomeLab OS

Hey everyone,

I would like this to become a megathread, but we’ll see. I’m just curious what setups others in the community have for their HomeLabs. I’ve felt like I do my HomeLab the hard way in many respects compared to what I see from others. A lot of people seem to use platforms like VMware ESXi or Proxmox to host many different OSes on their HomeLabs; by comparison, my approach is more manual: a single Linux host that runs the applications behind my pfSense firewall. Originally, I started self-hosting on a Linode instance back in 2019, and I went down this rabbit hole by installing Nextcloud the hard way. It all started as a desire to move away from Big Tech dependence, as is likely the case for most HomeLabbers out there. I found Nextcloud not to be a very good platform - it tries to do too much. The Unix philosophy comes to mind; I feel that if Nextcloud didn’t try to be everything at once, maybe it would be good at something and not so buggy. Eventually, as I went about replacing Nextcloud, I found Docker.

Then, when I decided to move, I prioritized ISP selection in my apartment search, and GFiber enabled me to move from cloud hosting to physical infrastructure. Inspired by Wendell’s HAProxy-WI video, all my hosting is done on my own hardware, with a Linode Nanode simply running an Nginx reverse proxy over a WireGuard tunnel to my physical machine. This setup is nice because I only have to open a single port, for WireGuard, on my home network. It honestly resembles this video from Tom Lawrence most.
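For anyone picturing this, here’s a minimal sketch of that VPS-side reverse proxy, assuming the home server sits at 10.10.0.2 on the WireGuard tunnel; the hostname, ports, and cert paths are all placeholders, not the poster’s actual config:

```nginx
# /etc/nginx/conf.d/homelab.conf on the VPS (illustrative values)
server {
    listen 443 ssl;
    server_name cloud.example.com;              # placeholder hostname

    ssl_certificate     /etc/letsencrypt/live/cloud.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem;

    location / {
        # 10.10.0.2 = the home server's address on the WireGuard tunnel
        proxy_pass http://10.10.0.2:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The appeal is that the only inbound port on the home firewall is WireGuard’s single UDP port; all HTTP traffic rides the tunnel.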

My HomeLab started out on Debian, then I moved to AlmaLinux, and finally to RHEL (with a developer license). The move away from Debian also saw me move to Podman rather than Docker, and currently I’m considering switching to openSUSE Tumbleweed or openSUSE Leap. I don’t think I’ll ever move back to Debian, though, because, in my opinion, the RHEL platform is a much better server OS.

Anyway, I’ll leave you with a list of software that I currently use and for what purpose:

Honestly, I would like to add Immich or something similar for photo synchronization between my phone and desktop, but the installation instructions are too rigid and I’d have to dive into it quite a bit deeper. The same goes for Bluesky, tbh.

I have something similar at home on a small mini PC. Out of curiosity, did you separate the containers into different networks? How did you design file access: one user per app, or combined?

So, when I began moving away from Docker, a big impetus was greater separation of concerns from a security perspective. Each service gets its own user, and if it is containerized, it’s using rootless Podman. I use Quadlet service files, so each application has all of its components managed by systemd.
Unfortunately, RHEL 9.4 ships a very old version of Podman with no viable way to install a newer one, so some features of Podman Quadlet seem to be missing. Quadlet replaced the `podman generate systemd` command, or something like that.
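For anyone who hasn’t seen Quadlet, a minimal rootless unit looks something like this (the name, image, ports, and volume path are illustrative, not the poster’s actual setup):

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example rootless container managed by systemd

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
# :Z relabels the volume for SELinux on RHEL-family hosts
Volume=%h/web-data:/usr/share/nginx/html:Z

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates a `web.service` unit, and `systemctl --user start web.service` runs the container under that user.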

Mine currently runs Ubuntu Server, and I host most everything on VMs. Some are Windows; others are CentOS, Ubuntu, and Debian. I tend to avoid Docker, as I’ve run into random issues with it and don’t feel like building my own containers for stuff that isn’t on Docker Hub. Beyond that, some stuff runs on bare metal, but only for performance reasons (game servers).

You can see what I’m hosting in this post: What are you self-hosting? - #60 by xyz

Beyond that, I have a Raspberry Pi 4 2GB and a Pi 5 4GB to host the 24/7 services that I consider more “critical”, with each acting as failover for the other. Pi-hole, WireGuard, and an SMB share run on these. I’m actually in the process of migrating Caddy to the Raspberry Pis; just currently reading documentation on best practices for building a cluster.

I would look at Syncthing for photo syncing. It will also allow you to sync more, such as music, documents, etc. If you are on Android you can use this: GitHub - Catfriend1/syncthing-android: Syncthing-Fork - A Syncthing Wrapper for Android.

FYI, this is not an official app. The original Android app was deprecated this past December due to limited dev resources and Google being a pain. I would definitely research the above app before YOLOing it onto your personal device, though.


I use a Debian dom0, with Debian PVH domUs on an ASRock deskmini.

I’m pretty impressed with Xen, and enjoying it a lot more than the more status quo KVM.

Currently playing around with k3s within that.


I actually dropped Joplin for Obsidian about a year ago. How are you liking Joplin?

I use Nextcloud to keep everything in sync, which eliminates the cal/contact sync and syncthing requirements.

Is there a reason you chose not to go with Nextcloud? Seems like, given all your work, there’s a lot of benefit to be gleaned from a unified instance like Nextcloud.


I’ve been on Ubuntu Server. If I did it over again, I’d probably do plain Debian…

Any reason RHEL has been better? Seems like all I need the host for is to run Docker, Jellyfin, Tailscale, Nextcloud, xrdp, VirtualBox, and some cron jobs. The rest are containers and VMs that would run the same on any system.

That’s mostly standard, because combining different services on the same Linux system is not a bright idea. Running NFS, Nextcloud, Syncthing, Zabbix, and other stuff straight on the OS is not ideal: the more things you run, the more likely you are to hit a dependency conflict (I’m not talking about OCI containers, but running the programs straight on the bare OS, through deb packages and added official repos). Virtualization, containerization, and other tricks like segregated packages (a Nix-pkg kind of deal) are ways to avoid that. Of course, security kind of goes out the window, since you can exploit one program for one thing and then another for a different thing and eventually gain access to the whole system.

Based! That’s how everyone should do it.


I’ve used Debian, Ubuntu, and CentOS in production. There are a bunch of tutorials for the EL family and a bunch of official repos compared to Debian, but as a platform, I find EL inferior to Debian. I’m also very biased against backporting kernel patches (so I’m running whatever latest kernel I can get). By far the worst has been Ubuntu, but for a multitude of reasons.

I can’t say much about openSUSE. I kind of struggled with it when testing it out. I ran it in WSL on my work laptop for just a few basic Linux tools, but in containers any distro can shine (and even then, after a while I started having trouble upgrading the WSL stuff, so I gave up on it and wiped it).

For fast-tracking development servers, I’ve actually had more luck with Fedora (hosting the Jenkins server, since it was getting newer versions faster there). If I were to redo most of these, I’d probably choose a rock-solid base (Debian) and LXC (Incus) for whatever container distro would ship (stable) updates the fastest.

Obviously my own lab is completely different, and so are my workloads. I find myself not needing much more than an NFS server (although I might be planning some personal communication services, which I’ve been having monologues about for a few years).


You’re quite a few steps into your HomeLab OS journey.

For the next step, add privacy concerns to your “desire to move away from Big Tech dependence”, and you start looking at the list of services and realize that you don’t want all of them running in the same network.

The driver is typically to start separating IoT devices into their own network. But you can take this quite a few steps further (separate smart TV, separate smart speakers, …, one net per vendor).

At this point you realize that some of the services need access to multiple networks, and you wonder how best and most securely to accomplish this. You get into VLANs and enterprise networking, you start looking deeper into the fundamental services of your home network (DHCP, DNS), and you realize that accomplishing a reasonable separation of networks is not easy on a single OS with Docker containers, because network management with Docker is, IMHO, rather painful.
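For illustration (not anyone’s actual setup here): giving Docker containers addresses on a dedicated VLAN typically means a macvlan network bound to a VLAN sub-interface; the subnet, gateway, and interface name below are placeholders:

```yaml
# docker-compose.yml fragment: a macvlan network on VLAN 30
networks:
  iot_vlan:
    driver: macvlan
    driver_opts:
      parent: eth0.30            # VLAN 30 sub-interface on the host
    ipam:
      config:
        - subnet: 192.168.30.0/24
          gateway: 192.168.30.1
```

Containers attached to `iot_vlan` then get addresses the firewall routes like any other VLAN host. Part of the pain alluded to above: with macvlan, the host itself cannot reach those containers without adding a separate macvlan interface of its own.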

This is where lots of people migrate to virtualization platforms such as Proxmox, TrueNAS, or XCP-ng.


I do use Syncthing. And I am aware of its problems on Android, which is why I said I want to look at Immich or something for photo synchronization.

I actually don’t use it a lot, tbh. But I am getting closer to the point of starting the book-writing project that I’ve always wanted to do. I actually did really well in my English classes in high school (better than in my math classes), and my freshman comp professor really liked my writing style. I knew I wanted to do CS for my degree, though. I just wish the job market hadn’t become so oversaturated with programmers by the time I got out of school. I have a job, but the job security still feels bad.

I would say that I use containers by default. Some things don’t have containers, like EteSync. And for other things I use KVM VMs.

There was something that newer versions of the Nextcloud desktop app kept doing to my home directory that I didn’t like. Plus, the password manager was really buggy. At this point, I really like Syncthing for syncing my files. I have it configured so that my server acts as the always-on central sync location, and that works out nicely. And Bitwarden is irreplaceable for me. There’s a bit I don’t like about EteSync, though, so Nextcloud was nicer for contacts and calendar sync as well as photo sync on mobile. I’m considering trying ownCloud if I can get it working in a rootless Podman container. Oh, also, I use NFS regardless. It’s completely unnecessary the way I use it, and it’s an RO share. Though I’ve considered dabbling in iSCSI just because.

I just like the software stack better: firewalld, Podman, dnf. Generally, I also feel like it’s a more stable platform that doesn’t break as easily on me. We use RHEL at work too, so I’ve definitely become way more entrenched in the ecosystem.

I actually have a very specific way I run the services on my HomeLab that takes security and compatibility into account. Services have to run as unprivileged users, and everything lives in that user’s home directory. Nothing runs as a system package except basic administrative stuff like WireGuard, Nginx, and other utilities. I refuse to use Docker because of, in my opinion, the security problems of running the Docker daemon as root, which regular users in the docker group have access to. Of course, VMs are run as root with virsh, but virtualization layers the security somewhat. I also layer the security systems, so if one fails, the theory is that the others will hold.

Interestingly, my company’s IT department keeps saying that they’ll probably ban WSL from our corporate devices because of some security issue with it. Not related to the topic, just wanted to put this in there.

I’m already moving there. I’d been wanting to for a while, but it was partially a monetary issue back then. Though, I might have been able to do it sooner if I’d decided to buy something other than a Netgate 6100. Even before that, almost all services could only communicate over the WireGuard tunnel between the server and Linode.

I don’t think I’ll run into this issue. My firewall has always been a separate device; I didn’t think to mention it earlier, so that’s my bad for leaving it out. Conceptually, I have a lot to learn when it comes to VLANs, and even what the physical wiring of that network would look like. So yes, I am already at the stage of wanting to move to different VLANs, though the primary driver currently is isolating my work computer from everything else when I WFH. Even so, giving things access to multiple networks is something I am struggling with right now; I’ve been talking about it over on the Lawrence Systems forums, actually. Honestly, a lot of things are broken right now. I think I messed up the server’s firewall when trying to figure out how to port forward to KVM hosts :frowning: But I haven’t been able to prioritize its maintenance lately, so I’ve only done the bare minimum to keep the essentials working. That’s different now, so I’m making a renewed effort to get everything sorted again.

I may need to give Xen a try. Don’t you have to use a special Xen kernel?

TrueNAS Scale:

Ubuntu 24.04 VM (on TrueNAS) running Crafty Controller and a Kali Linux container.

Windows 11 VM (TrueNAS).


Yeah, you run Xen on the bare metal. In Debian it’s easy to do: one apt-get and a reboot later, and you’ll boot into Xen, with your Debian install becoming dom0 (the privileged domain that has direct hardware access).

You’ll probably want to stick to Linux kernels built to run under Xen too - which Debian’s are.
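As a sketch of that “one apt-get and a reboot” on Debian (the meta-package name is the one Debian currently ships; verify it against your release before running anything):

```shell
# Install the Xen hypervisor and toolstack; GRUB is updated to
# boot Xen with the existing Debian install as dom0.
sudo apt-get install xen-system-amd64
sudo reboot

# Back in dom0, list running domains to confirm Xen is active:
sudo xl list
```

`xl list` should show `Domain-0` once you’re booted under the hypervisor.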

I’m not sure how other distros compare. Debian works well, but it’s very much a build-your-own-solution-out-of-the-supplied-parts situation.

If you didn’t want to build your own solution, then Qubes OS (desktop-centric) and XCP-ng (server-centric) are built on Xen too.

As an aside, File:Xen-colors.png - Xen is an interesting presentation of the spectrum of virtualization (it’s not exclusive to Xen).


You found the network management painful? How so? For me, the important part was understanding that separate ip links needed to be set up to mirror the networks defined in the docker-compose files.

Just got ownCloud up. I’ve been pretty lazy about doing actual work on my HomeLab lately, despite unironically just dropping like $1k on a new rack and accessories.


I know that feel. I just built a new TrueNAS box (well, in early December) and I’ve got my system like 50% migrated.


Implementation is the hard part, and sticky, too.
I am most familiar with Ubuntu, so I stuck with it for the Homelab OS. It does an ok job and works for me currently.


So, this isn’t my first time trying to run ownCloud in my HomeLab. Last time, I was trying to use a host-mounted volume; this time, I just created a Podman volume, as they do in their official docker-compose file. But I just remembered why I was trying to do it the other way earlier: my HomeLab already has the files I want accessible from ownCloud in the syncthing user’s home directory. So I was going to use the same tactic I used with Jellyfin, where I bind-mounted my NFS shares into the Movies directory in the jellyfin user’s home directory. So, although it works, it doesn’t work how I would like it to. The host-mounted volume didn’t work on rootless Podman because the ownCloud devs assume you’re using the rootful Docker daemon. They have this asinine entrypoint that goes and chowns all the data directories, which conflicts with the Linux host’s user namespaces: How to debug issues with volumes mounted on rootless containers
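For reference, the usual workaround for that kind of ownership mismatch is to chown the host directory from inside the rootless user namespace before bind-mounting it. The path and UID below are illustrative; 33 stands in for whatever UID the container’s entrypoint expects to own the data:

```shell
# podman unshare runs the command inside the user namespace, so
# "33" means "UID 33 as the container sees it", not host UID 33.
podman unshare chown -R 33:33 ~/owncloud-data
```

From the host’s perspective, the files end up owned by the corresponding subuid, which is what the rootless container maps back to its internal user.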

Hello People.

Self-hosting and becoming a quasi cloud engineer for yourself is an amazing journey, one that sometimes leads to a professional career around it. In my case, I run a CentOS Stream 10 server with Docker microservices behind Traefik and Authentik.

| Service | Description | Source |
| --- | --- | --- |
| Traefik | Reverse proxy router | Traefik, The Cloud Native Application Proxy \| Traefik Labs |
| Authentik | Identity provider | https://goauthentik.io |
| CrowdSec | Security engine | https://www.crowdsec.net |
| Forgejo | Git server | https://forgejo.org |
| Nextcloud | Cloud storage | https://nextcloud.com |
| Jellyfin | Media server | https://jellyfin.org |
| *.arr stack | Jellyfin support software | GitHub - Servarr/Wiki |
| Kavita | Calibre-like reading server | GitHub - Kareadita/Kavita |
| Navidrome | Subsonic server | https://www.navidrome.org |
| WordPress | Website builder | https://wordpress.com |
| phpMyAdmin | SQL manager | https://www.phpmyadmin.net |
| Grafana | Docker and server log dashboards | https://grafana.com |
| Glances | Simple monitoring UI | GitHub - nicolargo/glances |
| Homepage | App web wrapper | https://gethomepage.dev |
| Kasm Workspaces | Container streaming platform | https://kasmweb.com |
| Vaultwarden | Credential server | GitHub - dani-garcia/vaultwarden |
| Gluetun | VPN tunnel container | GitHub - qdm12/gluetun |

All of the above services are built with docker-compose, defined and versioned with git, and pushed/pulled as necessary (to the Forgejo container).
Traefik works behind Cloudflare. CrowdSec actively protects the most critical services exposed to the Internet, such as Nextcloud, Forgejo, Vaultwarden, and Jellyfin. Jellyfin itself, given its insecure state, works with an Authentik LDAP and Duo configuration, where no one gets authenticated without Authentik dealing with Duo first. Where needed, the services above are backed by MySQL and Postgres databases and Valkey in-memory stores. All exposed apps are behind Authentik; the ones whose APIs bypass Authentik are directly protected by CrowdSec.
It is a somewhat complex, scalable, and flexible stack. Traefik and CrowdSec alone are not easy to set up. But if you love to learn and have the time to put in, it is totally worth it.
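A minimal sketch of the Traefik-plus-labels pattern described above (the hostname and the `whoami` demo service are placeholders, not the actual stack):

```yaml
# docker-compose.yml: Traefik discovering one backend via labels
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  whoami:
    image: traefik/whoami          # demo backend that echoes the request
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
```

Each additional service is just another block with its own router labels; middlewares (such as an Authentik forward-auth) attach to routers the same way.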


That seems like a shortsighted design.

I’ve run into a lot of strangeness with these web apps that should just be a LAMP container with some recommended volume mount points. I’m sure there’s some logic behind it, but I really wish they’d provide a raw variant.


I ran into the same thing with Nginx Proxy Manager (which I’m no longer using) because of that exact issue. It chowns all the files in the node_modules folder, and on spinning disk that takes ages. So any time the container updates, I need to wait 15-20 minutes for it to come back up. It’s really dumb.


Finally got my RMA’d RAM back from G.Skill. Instead of 4x16GB in my main PC, I’m going to give the home server 2x16GB to ‘upgrade’ it from 2x8GB. Found another old 1TB drive; debating which kind of array to configure the HDDs into.

Currently TrueNAS is installed, but I may try Proxmox again. Haven’t had much luck with Frigate using the GPU for detection.

The server is downstairs by the TV, and I want to use it for YouTube instead of the Roku.

Starting out, the plan is to do something along the lines of:

  • NVR for cameras
  • Immich
  • Pi-hole
  • Document scanner/organizer
  • Watch YouTube/streaming via HDMI to TV
  • Music streaming for my phone when not home
  • Moonlight to play games from the main computer downstairs with Sunshine
  • Obsidian or something
  • Mealie or a recipe manager that also keeps inventory to show what could be made
  • Blu-ray/DVD ripping (have an older internal Blu-ray writer)
  • Eventually a capture card for streaming

Specs:

  • CPU: Ryzen 1800x
  • Mobo: Gigabyte X370 K7 (two Ethernet ports and a U.2)
  • Ram: G.Skill 32GB DDR4
  • GPU: Intel Arc A750
  • HDD: 4x1TB
  • SSD: 128GB OCZ Agility 3 (SATA)
  • M.2: Intel Optane P1600X 58GB (Used for OS)