(vSphere) CSI-Driver

Hey there,

I’m in the process of evaluating k8s at work for the whole company, and so far everything’s great. I just have a few questions about persistent storage. I’ll quickly go through our stack:

The VMs are deployed in a vSphere cluster that gets its storage from an iSCSI SAN. The VMs are created from the latest Ubuntu cloud image, with all of the needed libraries installed through cloud-init. We manage the nodes with Ansible using Kubespray, which handles all the necessary things like adding nodes, doing rolling upgrades of the cluster to a new k8s version, and so on.

There is just one thing left to figure out: storage. I must admit I’m a little overwhelmed by all the different options that are out there.

I already tried out the NFS external subdir provisioner, which is fine for now for basic RWX needs. However, we’d like a native CSI driver so that we can use snapshots with Velero, Kasten, or similar tools.
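For reference, CSI snapshotting boils down to a VolumeSnapshotClass for the driver plus VolumeSnapshot objects, which Velero or Kasten then create and manage for you. A minimal sketch, assuming the vSphere CSI driver ends up working; the class name and the PVC name `data-pvc` are made up:

```yaml
# Hypothetical example: a VolumeSnapshotClass plus one on-demand snapshot.
# The driver name assumes the vSphere CSI driver; substitute whichever
# CSI driver you end up deploying.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: csi.vsphere.vmware.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snap
  namespace: default
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: data-pvc   # placeholder PVC name
```

Note that this also requires the external snapshot controller and the snapshot CRDs to be installed in the cluster, which Kubespray does not necessarily deploy by default.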

Since we aren’t allowed to tap into the iSCSI storage network, we can only provision storage through vSphere… That’s why I tried the vSphere CSI driver, but I always ran into a wall when I had to set up a storage policy in vSphere: it always complained about no compatible datastores. Since I’m a noob when it comes to vSphere, it could very well be a problem on my end.

Does anyone use it without vSAN or NFS-Datastores?
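In case it helps anyone answering: as far as I understand, the StorageClass for the vSphere CSI driver can also reference a datastore directly via `datastoreurl` instead of going through a storage policy, which would sidestep the "no compatible datastores" problem entirely (a policy built from vSAN capability rules won’t match plain VMFS datastores, whereas a tag-based placement policy should). A sketch, with a placeholder datastore URL that would have to be replaced with the real one from the datastore’s summary page in vCenter:

```yaml
# Sketch of a vSphere CSI StorageClass that pins provisioning to one
# datastore instead of using a storage policy. The datastore URL below
# is a placeholder, not a real value.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-iscsi
provisioner: csi.vsphere.vmware.com
parameters:
  datastoreurl: "ds:///vmfs/volumes/placeholder-datastore-id/"
  csi.storage.k8s.io/fstype: ext4
```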

The closest I’ve come to a fully working CSI driver is Longhorn, but with that in place I of course need to run in replica-2 mode (replica 3 would be desirable, but 2 is okay for us). With two replicas, performance was roughly a third of what it was with NFS (no redundancy there, but good enough for testing).
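For completeness, the replica count in Longhorn is set per StorageClass, so a replica-2 class for general workloads could coexist with a replica-3 class for critical data. A sketch following the parameter names from the Longhorn docs; the class name is made up:

```yaml
# Sketch of a Longhorn StorageClass fixed at two replicas.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-r2
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "2880"
```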

Sure, some overhead is to be expected, but isn’t there a different solution for me? What about OpenEBS or Mayastor? Has anyone tried those yet?

I’d also be happy to compensate for any good help in one way or another! :slight_smile:

Thanks a ton