Build a ZFS Pool on NVMe

Question: Is it possible to build a striped ZFS pool on 3 NVMe drives and then put my OS on that?
I currently have a RAID 0 on NVMe; I formatted a 500 MB partition as FAT32 for GRUB and my boot loader, and built my RAID array on top of that.
I’ve been using Linux a little over a year (noob), and I have no ZFS experience.

There’s documentation out there about putting your root on ZFS. I think (but haven’t done it personally) that Ubuntu’s installer even handles a ZFS root during the install?

Read some docs, back up your data, and experiment. We’re here for you if you get stuck on specific questions.

ZFS doesn’t really have “stripes”. It writes out “blocks” split across the vdevs in a pool, and it is NOT guaranteed to do so evenly.

With 3 sticks, you could do:

  • 1x vdev composed of 3 mirrored drives, for triple redundancy.
  • 3x vdevs of 1 drive each (the closest thing to a stripe), for maximum performance and maximum data loss: lose any one drive and the whole pool is gone.
  • 1x vdev composed of 3 drives in raidz1, so you can lose one drive. This is typically the sanest option.
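As a rough sketch, the three layouts above map to `zpool create` like this. The pool name `tank` and the device names are placeholders (check yours with `ls /dev/disk/by-id/`, which is the more robust way to reference drives), and these commands will destroy whatever is on the disks:

```shell
# Triple mirror (one 3-way mirror vdev): any 2 of the 3 drives can fail.
zpool create tank mirror nvme0n1 nvme1n1 nvme2n1

# "Stripe" (three single-drive vdevs): fastest, zero redundancy,
# one drive failure kills the pool.
zpool create tank nvme0n1 nvme1n1 nvme2n1

# raidz1 (one vdev): survives one drive failure,
# roughly two drives' worth of usable space.
zpool create tank raidz1 nvme0n1 nvme1n1 nvme2n1
```

These are destructive administration commands against real disks, so treat them as a layout sketch, not something to paste blindly.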

ZFS is more about data protection than performance. Frankly there are lots of ongoing issues right now with high performance arrays, and you may not get what you want out of it even with extensive tuning, which is a black art in and of itself.


The other thing to bear in mind with a 3x NVMe stripe is that unless they all have their own CPU PCIe lanes, some of them will run at different speeds from the others.

ZFS will (I think?) be limited by the speed of the slowest vdev, so if you had, say, one 3.2 GB/s drive on the CPU and one or two of the others on the chipset, performance could tank down to 3x the slowest drive’s effective speed (or half that, if two drives are each saturating the chipset link at 50%). Which maybe isn’t really worth it?

If they’re all on the CPU… well, good. But not many boards have 3x NVMe wired directly to the CPU unless you’re talking proper HEDT, and even then, check that they’re all wired to the CPU.
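One way to check how each drive is attached (a sketch, assuming a typical modern kernel where `/sys/class/nvme/nvmeX/device` symlinks to the underlying PCI device) is to read the negotiated PCIe link status from sysfs:

```shell
# Print each NVMe controller's negotiated PCIe link speed and width.
# A chipset-attached drive often shows a narrower or slower link than
# a CPU-attached one.
for dev in /sys/class/nvme/nvme*; do
    echo "$dev: $(cat "$dev"/device/current_link_speed) x$(cat "$dev"/device/current_link_width)"
done

# To see which root port each drive actually hangs off, the PCI tree view helps:
# lspci -tv
```

Cross-reference the addresses against your board manual’s block diagram to tell CPU lanes from chipset lanes.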

Personally I’d compromise: assuming they all perform the same (and maybe even if not, I’d consider the I/O hit worth taking), hook them up as RAIDZ1 so you have some redundancy. You’ll still get improved read/write throughput in MB/s terms. The IOPS penalty that parity RAID imposes on many small transfers, which hurts with spinning rust, won’t be so significant with SSDs: they’re capable of so many more IOPS that a home/single user likely won’t run them hard enough to saturate that aspect of them anyway.
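If you do go RAIDZ1 and want to sanity-check what you’re actually getting, a quick `fio` run against the pool is an easy way. The `/tank` mountpoint is a placeholder for wherever your pool is mounted, and `fio` needs to be installed:

```shell
# Sequential write throughput (MB/s). Writes a 4 GiB test file into the
# pool and deletes it afterwards (--unlink=1).
fio --name=seqwrite --directory=/tank --rw=write --bs=1M --size=4g --unlink=1

# 4K random reads, to look at IOPS rather than raw MB/s.
fio --name=randread --directory=/tank --rw=randread --bs=4k --size=4g --unlink=1
```

Note that ZFS caching (ARC) can inflate read numbers, so treat results as ballpark figures rather than raw drive performance.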


Sorry about that, I should have included my system specs.