Shared Storage for Virtualized Environments

Here is the short version of my questions:

  1. Which is best for my setup when it comes to shared storage for my Proxmox VE cluster: iSCSI or NFS?

  2. Is there a way to create an “HA” solution that would cover two areas of weakness in my current setup? One is a device or other hardware failure; the other is maintenance and updates.

Here are the details of my current setup to add context to the questions. My homelab runs Proxmox VE (v7.1) in a seven-node cluster (most nodes are DL380s, but three are not), and I use NFS for shared storage for ISO images, backups, and VM disks.

Right now my shared storage comes from two places:

  1. A QNAP TS-853A with 16 GB of RAM running TrueNAS CORE, used for ISO images and backups.
  2. A QNAP TS-451 with 16 GB of RAM running TrueNAS CORE, used for VM disk storage.
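
For context, here is roughly what those two shares look like in the cluster's `/etc/pve/storage.cfg` (the storage IDs, addresses, and export paths below are placeholders, not my real values):

```
nfs: qnap-isos
    path /mnt/pve/qnap-isos
    server 192.168.1.10
    export /mnt/tank/isos
    content iso,backup

nfs: qnap-vmdisks
    path /mnt/pve/qnap-vmdisks
    server 192.168.1.11
    export /mnt/tank/vmdisks
    content images
```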

I do, however, have a second, identical QNAP TS-451 that I want to use for cluster storage as well, and I was thinking of going one of these ways:

  1. Use one TS-451 for OS drives and the other for data drives. This would hopefully take some load off the current VM disk storage device and also keep things more uniform: OS disks would always be the same size, and only the data disks would need to grow as data is added to the VMs (see the first sketch after this list).

  2. Find a method (I was thinking either a virtual IP with some replication, or something like TrueNAS SCALE clustering) to have them work together, so that they split the load and, if one device fails or needs maintenance, the other can still serve the VM disks (second sketch below).

  3. Another option I just thought of, and this sounds a little crazy but might work, would be to run an iSCSI (or maybe an NFS) disk on each of the two QNAPs, then add both (2 x OS and 2 x data disks) to each VM and use ZFS or another filesystem to create mirrored disks inside the VMs (third sketch below). This would essentially replicate the idea of mirrored boot devices and mirrored data devices.
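
For option 1, the appeal is that provisioning stays simple and uniform. A rough sketch of what I have in mind, using `qm` on a Proxmox node (`qnap-os` and `qnap-data` are hypothetical storage IDs, and the sizes are just examples):

```bash
# Give VM 101 a fixed-size OS disk on one QNAP and a data disk on the other.
qm set 101 --scsi0 qnap-os:16
qm set 101 --scsi1 qnap-data:100

# Later, grow only the data disk as the VM accumulates data.
qm resize 101 scsi1 +50G
```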
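
For option 2, as far as I understand it, TrueNAS's built-in replication tasks are just scheduled `zfs send`/`zfs receive` under the hood, roughly like this (the pool, dataset, snapshot, and host names are all made up):

```bash
# On the primary NAS: snapshot the VM-disk dataset and send the
# incremental change to the standby unit.
zfs snapshot tank/vmdisks@repl-new
zfs send -i tank/vmdisks@repl-prev tank/vmdisks@repl-new | \
    ssh standby-nas zfs recv -F tank/vmdisks
```

The part I am unsure about is that this is asynchronous, so on failover the standby would be behind by up to one snapshot interval, and I would still need something like a virtual IP (CARP, since TrueNAS CORE is FreeBSD-based) to move the NFS/iSCSI endpoint between the two boxes.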
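
To make option 3 concrete: each VM would get one virtual disk from each QNAP, and the guest itself would mirror them. Inside a Linux guest that might look like this (device names will vary):

```bash
# After attaching one disk from each QNAP-backed storage, mirror them
# with ZFS so either NAS can fail without losing the data.
zpool create data mirror /dev/sdb /dev/sdc
zpool status data
```

Mirroring the data disks this way seems straightforward; mirroring the OS/boot disks is fiddlier since the bootloader has to live on both, so I might end up only mirroring data.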