Software RAID, Caching and Hypervisors

I have been looking for an alternative to hardware RAID, which led to my research into software RAID for a single-node ESXi hypervisor. The quick solution I considered was passing my HBA through to a FreeNAS or BSD VM and creating ZFS pools as raidz1, raidz2, or mirrors (RAID 10). I also wanted to make use of an SSD or PCIe NVMe drive, and FreeNAS was my only familiarity there. Long story short, NVMe driver support on the BSD side is poor, so Linux would probably be the better platform.
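For reference, the ZFS setup described above would look roughly like this. This is a hedged sketch, not a tested recipe: the device names (`da0`–`da3`, `nvd0`) and pool name are placeholders and will differ on a real passed-through HBA.

```shell
# Create a raidz1 pool from four disks on the passed-through HBA
# (device names are assumptions; adjust to your system).
zpool create tank raidz1 da0 da1 da2 da3

# Add the NVMe device as an L2ARC read cache.
zpool add tank cache nvd0

# Verify the layout.
zpool status tank
```

A mirror/RAID 10 layout would instead be `zpool create tank mirror da0 da1 mirror da2 da3`.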

So then I ended up on Ceph and GlusterFS, but these seem to be more about spanning storage across (in my case) hypervisor nodes, and they sit at a higher abstraction layer. Next were LVM and mdadm, which sound much closer to what I was looking for as a starting point for software RAID, and for the most part these seem to be the only real options. The question is which to lean on (are there notable performance differences?). It sounds like both can achieve the same things and both have caching features (similar performance?): lvmcache on the LVM side, and mdadm's --write-journal, with write-through/write-back modes. Although I'd assume the push would be toward the newer tech, LVM.
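To make the two options above concrete, here is a hedged sketch of each. Device names (`/dev/sda`–`/dev/sdc`, `/dev/nvme0n1`, `/dev/md0`) and sizes are placeholders. Note one caveat worth knowing: mdadm's write journal closes the RAID5 write hole, but it is not a general read/write cache the way lvmcache is.

```shell
# mdadm: RAID5 array with a write journal on the NVMe device.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda /dev/sdb /dev/sdc --write-journal /dev/nvme0n1

# lvmcache: a volume group spanning the slow array and the NVMe,
# a logical volume on the slow side, then a cache on the NVMe
# in front of it (write-through here; write-back is the riskier,
# faster alternative).
vgcreate vg0 /dev/md0 /dev/nvme0n1
lvcreate -n slow -L 1T vg0 /dev/md0
lvcreate --type cache --cachemode writethrough \
         -n fastcache -L 100G vg0/slow /dev/nvme0n1
```

Under the hood lvmcache uses dm-cache, so the real comparison is dm-cache vs. bcache vs. the mdadm journal, each with different behavior under failure.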

However, further research turned up bcache, Facebook's Flashcache, and EnhanceIO, which are curveballs (the last two are probably not worth the effort). There seem to be many options and configurations here, and some of these caches sound like they behave quite differently, which is another consideration. Has anyone tried any of these solutions, or have any feedback? Still much to be investigated. I haven't even begun to look at the filesystem options.
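For completeness, bcache setup looks roughly like the sketch below (requires bcache-tools; device names and the cache-set UUID are placeholders, so this is illustrative only, not a tested procedure).

```shell
# Format the slow device (e.g. the RAID array) as a bcache backing device.
make-bcache -B /dev/md0

# Format the NVMe as a bcache caching device; note the cset UUID it prints.
make-bcache -C /dev/nvme0n1

# Attach the cache set to the backing device (substitute the real UUID),
# then switch from the default write-through to write-back mode.
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
```

One behavioral difference to weigh: bcache sits below the filesystem as its own block device (`/dev/bcache0`), whereas lvmcache stays inside the LVM stack, so migrating a cache in or out later is easier with LVM.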

Helpful Resources
https://documentation.suse.com/sles/12-SP4/html/SLES-all/cha-multitiercache.html

Sounds to me like your strategy is good. Maybe Ubuntu if you’re not happy with BSD support?