Kubernetes with shared storage

I’ve spent a considerable amount of time trying to set up Kubernetes with storage on an NFS share that runs on my TrueNAS server, but it has proven impossible. All I want, for starters, is Gitea, Drone CI, and Nextcloud on a 3-node Kubernetes cluster I set up on 3 VMs running on Proxmox. I tried the nfs-provisioner with older versions of the Gitea Helm chart, and more recently I tried creating a PV and a PVC on that share manually and telling the Gitea chart to use them. I also tried using Rancher to do all of that, but it would fail too often and you had to guess what the web GUI was doing to understand why things were failing. Basically I’m stuck and hoping that someone else has gone down the same path of insanity and figured it out, or that anyone here can help.
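
For anyone wondering what I mean by the manual PV/PVC route, a minimal sketch of the idea (server address, export path, sizes, and names are all placeholders):

```yaml
# Static PersistentVolume backed by the TrueNAS NFS export (placeholder server/path)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10          # TrueNAS box
    path: /mnt/tank/k8s/gitea     # exported dataset for Gitea data
---
# Claim that binds to the static PV above (empty storageClassName skips dynamic provisioning)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-data-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
```

The Gitea chart then gets pointed at that claim through its persistence values (`persistence.enabled: true` plus the existing-claim setting, whose exact name depends on the chart version).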

Have you tried mounting the NFS share locally on all nodes? You could use bind mounts then, I guess. I was testing k3os with an NFS share on a Pi and that worked with the nfs-provisioner… What do you mean by “failed too often”?
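
Roughly this kind of thing, I mean (host and paths are placeholders): put the export in /etc/fstab on every node, e.g. `truenas.lan:/mnt/tank/k8s  /mnt/k8s-data  nfs  defaults,_netdev  0 0`, and then point a hostPath PV at the mountpoint so every node sees the same data:

```yaml
# hostPath PV whose path sits on the NFS mount shared by all nodes (placeholder path)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-hostpath-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /mnt/k8s-data/gitea    # lives on the NFS mount, identical on every node
```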

I have thought about mounting it on all nodes, but that would barely be an improvement over the default setup, which also uses hostPath, and since I’m not doing this for lack of storage space I crossed that option out. As for Rancher failing too often: pods wouldn’t start even though everything looked fine from the web UI, and I was using the same settings and values as in my manually deployed Helm charts. The NFS share would also disconnect randomly, even though I have another one mounted on Proxmox that never fails, and I even ran a 24-hour file-transfer test on the exact NFS share meant for the Kubernetes data. Lastly, Rancher required a few too many resources, and I only have a single box to run all of this on, which means that if I’m doing this properly with multiple Kubernetes nodes I’d also need HA Rancher, something my RAM capacity can’t handle.

Been trying to get the same thing working. Haven’t actually used the provisioner but it appears to have deployed successfully.

When exactly are you getting problems? When deploying the nfs-provisioner? When trying to actually use it?

Sorry that I’m nearly half a year late. The solution ended up coming in the form of democratic-csi (https://github.com/democratic-csi/democratic-csi), a CSI driver that is TrueNAS-aware and ZFS-aware, and I wrote a little about the road to it in “Homelab Current Form” if anyone is interested. The problem was that Kubernetes 1.20 broke the old nfs-provisioner, since it isn’t a CSI driver. There are also csi-driver-nfs (https://github.com/kubernetes-csi/csi-driver-nfs) and nfs-subdir-external-provisioner (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner), but I haven’t tried those two, so YMMV.
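
For anyone else going this route, a rough sketch of the kind of values file the freenas-nfs example in the democratic-csi repo uses (host, credentials, and dataset names are placeholders, and the exact keys may have moved around, so check the repo’s examples directory for the current format):

```yaml
# democratic-csi Helm values, freenas-nfs driver (everything below is a placeholder)
csiDriver:
  name: org.democratic-csi.nfs
storageClasses:
  - name: truenas-nfs
    defaultClass: true
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - noatime
      - nfsvers=4
driver:
  config:
    driver: freenas-nfs
    httpConnection:
      protocol: https
      host: truenas.lan            # TrueNAS API endpoint
      port: 443
      apiKey: CHANGE-ME            # or username/password
      allowInsecure: true
    sshConnection:
      host: truenas.lan
      port: 22
      username: root
      privateKey: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        ...
    zfs:
      datasetParentName: tank/k8s/vols               # where new volumes get created
      detachedSnapshotsDatasetParentName: tank/k8s/snaps
    nfs:
      shareHost: truenas.lan       # address the nodes mount the shares from
```

It installs from the project’s Helm repo with something like `helm repo add democratic-csi https://democratic-csi.github.io/charts/` and then `helm install zfs-nfs democratic-csi/democratic-csi -f values.yaml -n democratic-csi --create-namespace`.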

Also late, but I have some colleagues who have used https://longhorn.io/ to create and manage their storage. I’m from more of a cloud background myself, so I don’t fully understand your context, but it might be worth seeing whether it’s of any use for your problem.
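
If it helps, the Longhorn quick start is basically just a Helm install, something like `helm repo add longhorn https://charts.longhorn.io` followed by `helm install longhorn longhorn/longhorn -n longhorn-system --create-namespace`, though each node needs the iSCSI prerequisites from their docs first.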