Home Lab Conversion....Planning Phase

My apologies as this will probably be a very long post…

I currently run my home lab on a number of old enterprise servers (three DL380 G5s and a PE2950 G3, along with one TrueNAS box and three QNAP NAS boxes), currently set up in two 42U racks and a 45U rack. While that gives me lots of space for new gear and/or modifications, it does take up a good chunk of room in the garage. The only other equipment in any of those racks is a monitor, a mid-tower configured as a testing machine, a KVM, a modem, a MoCA adapter, three managed switches, a UniFi AP, and some UPS boxes. In the fall of 2020, I was looking at upgrading my compute and storage machines to newer and more capable hardware, and eventually changing out all the switches for UniFi 10G switches.

However, this Christmas I got three Raspberry Pis to play with, and I started wondering if I could migrate most of my stack to them. I moved from Linux VMs running native services (things like Nginx, Apache, Emby, and other web-based services) to running things in Docker about a year ago and have been loving it ever since.

I have checked, and most of my software stack will run on ARM64, but not everything, so I will probably keep a single x64 system for running things like pfSense, Mailcow, Proxmox VE, Proxmox Mail Gateway, and others that require x64.

Could I get a number of Pis together and use something like Docker Swarm or Kubernetes to build something that could run my stack? Ideally, the Pis would be rack-mounted, would require little configuration to add into the cluster, and could be removed with little work as well. For storage, I could either put an SSD on each one for the Docker containers' local storage and use network storage for the rest, or run each Pi from only an SD card and use network storage for everything else.

I would ideally like to trim the racks down to something about standing-desk height and also cut the noise, as the enterprise gear is quite loud. Though I am not sure about moving away from my plans to swap the QNAP boxes out for TrueNAS boxes, either in enterprise machines or custom builds (I'm looking for 12 to 16 bays for storage and 2 bays for the OS).

I am looking for some guidance on where to focus my learning to build a test deployment using my three Pis as a proof of concept, as well as any other advice, guidance, input, or comments that will help with the ideas I am working through during this planning phase.

PS: I hope I picked the right topic for this post.

I’m kinda in the same boat. I have old enterprise steel that I’ve been running for homelab stuff and want to start paring down (mostly for power-consumption reasons). If you have the switching and Pi 4s, I would look at using network storage for k8s (and something like NFS StorageClasses). Exporting NFS from your QNAP would probably serve for persistent storage. I’ve played around a bit with using SD cards even for ephemeral storage for pods and am kinda done with it. Are you looking to do an HA control plane?
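For what it's worth, a common way to get an NFS-backed StorageClass is the nfs-subdir-external-provisioner Helm chart. A rough sketch only: the server address, export path, and class name below are placeholders, not your actual values.

```shell
# Sketch: point the provisioner at an NFS export on the NAS.
# "qnap.local", "/export/k8s", and "nfs-client" are placeholder values.
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=qnap.local \
  --set nfs.path=/export/k8s \
  --set storageClass.name=nfs-client
```

After this, any PVC that names that StorageClass gets a subdirectory carved out of the export automatically.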

I am planning to get as much into HA as possible, though it's not a 100% requirement. I was thinking of using CIFS or NFS to mount two folders on each Pi: one for container storage, mapped to the default location, and the other for images, also mapped to the default location. This should allow containers to migrate between nodes with little downtime, without needing to always pull the image, and without ending up with multiple data stores for the containers.

I ideally plan to scale down from my two 42U and one 45U racks to two or three 18U or 22U racks, depending on how much room I need for things other than the Pis.

If you can avoid mounting storage in the "classic" sense and use StorageClasses/PVs/PVCs to persist data, it'll be easier in the long run, especially if you are relying on Helm (or operators, for that matter). Not sure about CIFS; I've never used it with k8s, so I would take a look at the StorageClass coverage there. I'm sure someone has a good one out there. Unless you roll your own control plane, it's better to get HA done early, 'cause it's a royal PITA to do later. If you plan on running a registry in your lab, Harbor has been pretty solid in the 2.x releases. It'll give you a pretty costless way to pull down images via its proxy. Are you trying to share the /var/lib/whatever with the container images between the computes?
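To make the PVC idea concrete, a claim is just a small manifest; a minimal sketch, assuming an NFS-backed StorageClass named nfs-client already exists (the resource name and size here are made up):

```yaml
# Hypothetical PVC: "emby-config" and "nfs-client" are example names only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: emby-config
spec:
  accessModes:
    - ReadWriteMany        # NFS allows pods on any node to mount it
  storageClassName: nfs-client
  resources:
    requests:
      storage: 5Gi
```

A pod (or a Helm chart value) then references the claim by name, and the scheduler no longer cares which node the pod lands on.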

I haven't played with storage yet and plan to today. I have been planning around using Docker Swarm but might also have a look at k3s and Rancher with their storage module as well.

I am comfortable with Docker, but most of my hosting experience is using Proxmox and VMs.

Storage is easier on Kubernetes than Docker Swarm IMO, because the master maintains knowledge of all PVCs (persistent volume claims) and their locations. Additionally, you can build an NFS auto-provisioning StorageClass to make network storage provisioning fully automated.

I’m currently rolling out my network with a k3s cluster of Pis. I tried Docker Swarm and ditched it due to a lack of reliable storage and container/pod placement.

What was your setup method for k3s? Did you add something like Rancher and Longhorn to it as well?

Is there a way to use docker-compose or a similar YAML file in k3s?

Ansible.

Kubernetes uses manifests. It’s much more complex than docker compose but also much more powerful.
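To give a feel for the difference, here is a minimal Deployment manifest, roughly the equivalent of a one-service compose file; the name and image are only examples:

```yaml
# Example manifest; "whoami" and the image are illustrative choices.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2                    # like "deploy: replicas: 2" in a stack file
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami  # example image from Docker Hub
          ports:
            - containerPort: 80
```

You'd still need a Service on top to expose it, which is part of what makes manifests feel heavier than compose.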

There’s also helm.

But that’s a templating engine and a sort of “app store” for ready-to-go application stacks.
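As a sketch of the "app store" feel, installing a packaged stack is typically two commands; the repo and chart names here are just illustrative:

```shell
# Illustrative example: add a chart repository, then install a release from it.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx
# Later, "helm upgrade" and "helm uninstall my-nginx" manage its lifecycle.
```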

I don’t like it haha

So I did some testing: after stopping the stacks, I copied the volumes directory from each of my nodes into a temp location on each node. I then mounted a CIFS share on each of the nodes, mapped to Docker's default volumes directory. Once I copied the volumes from each node into that mapped folder and restarted the stacks, everything worked great.
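For anyone repeating this, the mount itself is nothing exotic; a hedged sketch of what such a CIFS mount over Docker's default volumes directory could look like (the server, share, and credentials file are placeholders):

```shell
# Placeholder server/share/credentials; needs the cifs-utils package installed.
# /var/lib/docker/volumes is Docker's default named-volume location.
sudo mount -t cifs //nas.local/docker-volumes /var/lib/docker/volumes \
  -o credentials=/root/.smbcred,uid=0,gid=0,vers=3.0
```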

However, when I did the same with the images directory, it didn't work: only the main node the images came from was able to start any containers; the rest failed.

Other than hosting a registry, which still requires each node to have a copy of the Docker image, or having the Docker images sync between nodes, how do you get shared image storage?

I’m running on Raspberry Pi 4s and would like not to have to add a second storage medium to house all the images, but I could if that's the only option for Docker Swarm.

Or do I bite the bullet and learn k3s, Rancher, and Kubernetes while still using images from Docker Hub?

Yeah, you can't network-mount Docker images. Each node must download and manage its own images.

I'm running all my Pi 4s on USB SSDs. It cost me $50 per node for 480GB of cheap flash, and it's 10x faster than SD.

Nothing wrong with using Docker Hub images on Kubernetes.

Also, for the time being, Docker is still a supported container runtime for k3s (containerd is actually the default it ships with). Eventually, Docker support will go away entirely, but that's not happening for a bit.


Frankly, I think Docker's time as an enterprise-class container runtime is limited. They want to do away with Swarm, the company behind it seems to be floundering, and even Red Hat is making a competitor.

The cases I have for my three Pis are from Argon One and have the M.2 slot. I did pick up one SSD, and it was like 60 bucks for 250GB, so I could do that, but I want to rack-mount the final set, so I need to find a mounting solution that would ideally allow for PoE power and a USB 3 SSD, either 2.5" or M.2.

The end goal would be somewhere in the neighbourhood of 12 to 24 Pis running as a cluster, though there are still the ARM vs. x86 issues I have with a few software titles.

One of the things I like about the Pi is that I can flash the image to an SD card, run a script on my computer, and have the node all set up and added in a matter of minutes. I assume this can be done with an SSD as well.

I’m currently designing an enclosure that you can 3D print and snap together, with hot-swap Pi sleds. The plan is to support a rack-mount solution, but I'm not 100% on that quite yet.

Yeah, I've got Ansible scripts to do that. Though I'm using Manjaro ARM.


I have been using Ubuntu ARM, which has been working really well. My typical testing setup is to flash it to the SD card, then connect over SSH and install Docker, a fan script, and any updates, change the main user from ubuntu to administrator, and set up passwords. Next, I build the swarm and add the other two nodes before installing Portainer, as I have not found a better GUI for checking how things are going, though I deploy almost everything via CLI.
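That per-node prep boils down to a few commands over SSH; a rough sketch (the manager IP and join token are placeholders, and the fan script depends on the case):

```shell
# Run on a freshly flashed node over SSH. Placeholders throughout.
sudo apt update && sudo apt upgrade -y
curl -fsSL https://get.docker.com | sudo sh     # Docker's convenience installer
sudo usermod -aG docker administrator           # assumed admin username
# Join the existing swarm; the token comes from running
# "docker swarm join-token worker" on the manager node.
docker swarm join --token <WORKER-TOKEN> 192.168.1.10:2377
```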

I was thinking about using a 4U drawer as a base for a Pi rack setup, using my Argon One cases and building some method to slide them into the drawer sideways or something, with fans in the back and slits in the front of the drawer for airflow.

I ragequit that distro when apt install docker installed a snap package.

I’m sure you can accomplish the same on ubuntu tho.


I’ll share cad when I get further along.

Yeah, it will use Snap by default, but I found an instruction set from Docker that lets you add their repos and install directly from them, along with docker-compose.
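For reference, the non-Snap route follows Docker's own apt instructions; a condensed sketch for Ubuntu on ARM64 (package names track Docker's current docs and may differ from older guides):

```shell
# Condensed from Docker's apt install docs for Ubuntu.
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=arm64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt update && sudo apt install -y docker-ce docker-compose-plugin
```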

Will your mount be done horizontally or vertically?

There is one on Thingiverse that does both orientations in a 6- or 12-Pi configuration, but no room that I can see for PoE and a storage medium other than the SD card.


I’m leaning towards vertical.

Yeah, that was my primary issue. It just doesn’t support performance.


Let me know how your mount is coming along, I would be happy to help with ideas and assist you in designing it.

I will. Nothing will progress this weekend.

Oh nice! I want to set up something, but most of the ones I saw don't have much space for a PoE HAT, a cooler, or an SSD. I hope you can post pix? :slight_smile:

I made some progress using some x86 VMs and NFS off of FreeNAS to get Docker Swarm running with shared persistent storage, and I seem to have Watchtower running as well.
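The NFS piece is just a named volume with driver options in the stack file; a sketch, with the server address and export path as placeholders:

```yaml
# Hypothetical stack-file fragment; the NFS server and export are placeholders.
version: "3.8"
services:
  whoami:
    image: traefik/whoami      # example service
    volumes:
      - appdata:/data
volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=freenas.local,rw,nfsvers=4"
      device: ":/mnt/tank/docker"
```

Because every node resolves the volume to the same export, a rescheduled container sees the same data wherever it lands.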

Next I'll try it on the Pis to see if it works in ARM64 land.