What to do for storage in K3s

I'm looking for a storage solution for the k3s cluster I'm going to build. Right now I use Docker, but I'm looking to move to Kubernetes, and storage is the main challenge. The cluster will run a bunch of self-hosted services such as Nextcloud, Matrix, and others. I don't need stunning performance, but it does need to be reasonably usable.

My first thought was Longhorn, but it seems overly complex for my needs. I'm still learning Kubernetes, so I'd rather not have to figure out a whole new storage system on top of that, and Longhorn adds extra overhead that I'd rather avoid.

I do know that Kubernetes supports NFS pretty well. There are various ways to set it up, and using it would add a lot of flexibility. However, I have heard horror stories about corruption, so I don't know whether NFS is safe to use, and I know it can have performance issues as well.
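For what it's worth, the simplest form of NFS in Kubernetes needs no driver at all: a static PersistentVolume pointing at the share plus a matching claim. A minimal sketch (the server address, path, and sizes below are made-up examples):

```yaml
# Static NFS PersistentVolume -- server and path are placeholder values
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nextcloud-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany            # NFS lets multiple nodes mount the same share
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10       # hypothetical NFS server
    path: /tank/nextcloud
  mountOptions:
    - hard                     # block rather than error out if the server drops
    - nfsvers=4.1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # empty string pins the claim to static PVs
  resources:
    requests:
      storage: 100Gi
```

The `hard` mount option matters for the corruption question: with soft mounts, I/O errors during an NFS outage can surface to the application mid-write.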

Another option is a clustered filesystem. I know Ceph is commonly used with Proxmox, but there are other options as well. Ceph's hardware requirements seem very high, which is off-putting, and everyone complains about GlusterFS's poor performance.

The last option I considered was a DIY storage solution built on tools like rsync or btrfs send. This seems like it has the most risk, but hear me out: I will only be running one replica of each container, so the data only needs to be in use on one node at a time. My thought was to use init containers to pull down the latest version of the container's data and then periodically push it back up. I don't like this solution because it's "going off the rails," so to speak, and is likely to cause corruption. However, it might work, provided it only syncs complete writes.
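A rough sketch of what that DIY idea could look like, assuming an rsync daemon on some storage box (all hostnames, images, and paths here are hypothetical, and the corruption caveat stands):

```yaml
# DIY sync sketch: pull the data before the app starts, push it back
# on a timer from a sidecar sharing the same volume.
apiVersion: v1
kind: Pod
metadata:
  name: matrix
spec:
  initContainers:
    - name: restore
      image: instrumentisto/rsync-ssh
      # Pull the latest copy from the storage host before the app starts
      command: ["rsync", "-a", "--delete", "rsync://storage-box/matrix/", "/data/"]
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: synapse
      image: matrixdotorg/synapse
      volumeMounts:
        - name: data
          mountPath: /data
    - name: push
      image: instrumentisto/rsync-ssh
      # Sidecar: push a copy back every 15 minutes. NOT crash-consistent --
      # rsync can capture half-finished writes unless the app is quiesced.
      command:
        - sh
        - -c
        - "while true; do sleep 900; rsync -a /data/ rsync://storage-box/matrix/; done"
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}             # node-local; the rsync copies are the only durability
```

The "only syncs complete writes" requirement is the hard part: plain rsync has no notion of application consistency, so databases in particular would need to be paused or dumped first.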

Update:

Something else just occurred to me: I could use Proxmox HA to hold the NFS share, assuming NFS proves reliable. All I would need to do is set up ZFS replication and then HA for an LXC container. It would be much simpler and should work pretty well. I'll have to test it to be sure.

I use Longhorn and NFS.

Mass storage is done on NFS; Longhorn handles the smaller persistent volumes. Longhorn resides on flash, and the NFS is backed by a ZFS array with 8 spindles plus flash ARC and ZIL.
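In practice that split just means choosing a storage class per claim. A sketch, assuming default class names from a stock Longhorn and csi-driver-nfs install (adjust to whatever your classes are actually called):

```yaml
# Small app volume on Longhorn (flash), bulk data on the NFS share.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # assumed default Longhorn class name
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi    # assumed csi-driver-nfs class name
  resources:
    requests:
      storage: 500Gi
```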


If you don’t want to tinker, just go for NFS: GitHub - kubernetes-csi/csi-driver-nfs: This driver allows Kubernetes to access an NFS server on a Linux node.
Getting HA NFS is a bit tricky, and the last time I used nfs-csi it had no support for quotas.
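With csi-driver-nfs installed, dynamic provisioning only needs a StorageClass pointing at the export; the driver carves out a subdirectory per volume. A sketch with placeholder server and share values:

```yaml
# StorageClass for csi-driver-nfs -- server/share are example values
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io    # provisioner name from the csi-driver-nfs docs
parameters:
  server: 192.168.1.10         # hypothetical NFS server
  share: /export/k8s           # each PV becomes a subdirectory under this share
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
```

Note the quota caveat above: because volumes are just subdirectories, the requested `storage` size is not enforced by the share itself.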

If you want real NFS HA, look into NFS-Ganesha.

I personally use rook-ceph. It’s relatively easy to set up once you know what you are doing, and you get both RWO and RWX PVs (plus object storage).

If it’s a small cluster you probably won’t see huge CPU or network usage.
I run rook-ceph on a k8s cluster with 3 worker nodes; each worker node has 1 MON and 2 SSD OSDs. I just checked: on a node with 52 days of uptime, each Ceph OSD uses about 1300 MiB of RAM, and the MON uses 450 MiB.
Ceph also allows you to lower the default OSD memory target to around ~1 GiB.
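Under Rook, one way to lower that target is the `rook-config-override` ConfigMap, which injects extra `ceph.conf` settings (the namespace below assumes a default rook-ceph install; restart the OSD pods for it to take effect):

```yaml
# Lower the OSD memory target via Rook's ceph.conf override mechanism
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-config-override
  namespace: rook-ceph         # assumes the default Rook namespace
data:
  config: |
    [osd]
    osd_memory_target = 1073741824   # ~1 GiB instead of the 4 GiB default
```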

Man, Rancher really was a game changer. Longhorn looks great. Gonna play with it a bit more after reading that summary.


There is a fair bit of overhead with it, though, so please be wary of that.