Best option for mixed M.2 NVMe/SSD storage on Linux?

Building a new PC this weekend - I have a 2 TB PCIe 4.0 970 EVO for the tasks I’ll need the speed for, but I’d also like a RAID 0 array of 3 SSDs for game storage and VM virtual disks. What’s the best way to have the OS and home directory on the M.2 drive, plus another separate storage pool? Ideally, I’d like to have different ~/ directories on different storage media.

Also, what’s the best method to create a RAID array on Linux? ZFS? Can I mix and match filesystems if one is RAID and one is not?

Sorry for all the questions!

You may want to change your title to “what is the best way to set up Linux on multiple drives” or something, as the kind of drives you have is largely irrelevant here and would probably distract from the answers. I can’t really help you with that question.

ZFS + Linux can be temperamental depending on the distribution. On rolling releases, if you don’t pin the kernel version (see the sketch after this list), the kernel can outpace the newest version ZFS explicitly supports, leaving ZFS broken until:

  • The ZFS devs extend compatibility (which can take weeks, or sometimes a few months when kernel devs break something ZFS needs)
  • You manually patch in compatibility yourself (which means it’s completely untested for your specific kernel, which is bad for a filesystem)
  • You roll back the kernel and hope that none of the other updates rely on the newer kernel you no longer have.
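
As a concrete sketch of pinning, on a Debian/Ubuntu-style system (an assumption; package names differ per distro) you can hold the kernel metapackages so upgrades don’t pull in a kernel newer than your ZFS module supports:

    # hold the kernel so `apt upgrade` won’t install a newer one
    # (package names are Ubuntu’s; adjust for your distro)
    sudo apt-mark hold linux-image-generic linux-headers-generic

    # release the hold once your ZFS version supports the newer kernel
    sudo apt-mark unhold linux-image-generic linux-headers-generic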

Certain distros make ZFS easy to deal with. Debian-based distros (or anything with an LTS kernel) seem to be the friendliest to it because of their slow update cycle. I use Proxmox, for example, though that’s really a server/hypervisor OS and not what you’re looking for. Ubuntu might be worth looking into.

The equivalent of a 3-disk RAID 0 in ZFS is a pool of 3 vdevs with a single drive in each vdev. Basically each vdev is like a “mirror” with only one drive; in fact you can easily attach a disk later to upgrade it into a real mirror. Without parity you can only detect errors, not repair them, and if a drive dies, the pool dies. Have backups. ZFS has a bunch of features that can accelerate workloads far beyond what other filesystems manage, in ways benchmarks won’t properly convey because their effectiveness depends heavily on the workload. Taking full advantage of them requires significant upfront research to understand and set up correctly.
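
A minimal sketch of that layout, assuming a pool named “tank” and placeholder device names (use stable /dev/disk/by-id/ paths for real hardware):

    # three single-disk vdevs = a stripe; ZFS spreads writes across all of them
    sudo zpool create tank /dev/sdb /dev/sdc /dev/sdd

    # check the layout
    zpool status tank

    # later: attach a new disk to one of the vdevs to turn it into a mirror
    sudo zpool attach tank /dev/sdb /dev/sde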

BTRFS’s software RAID 0 should (I think) also checksum data like ZFS does. With either ZFS or (especially) BTRFS, performance is going to be lower due to the overhead of making sure only correct data is returned. However, BTRFS has one big advantage: you never have to worry about updates or kernel version compatibility, since it’s in the kernel itself. BTRFS will have the least mental overhead. At least until something goes wrong and it drops into read-only mode.
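
A sketch of that with BTRFS, again with placeholder device names. Striping the data but mirroring the metadata is a common compromise, so the filesystem can still repair its own metadata even though striped data is only checksum-verified:

    # data striped (raid0), metadata mirrored (raid1) across the three SSDs
    sudo mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd

    # mount any one member; btrfs assembles the whole multi-device filesystem
    sudo mount /dev/sdb /mnt/games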

If you don’t care about detecting silent data corruption, for RAID 0 on plain Linux you can look into mdadm or LVM. I’m not familiar with these.
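
For completeness, a minimal mdadm RAID 0 sketch (not something I can vouch for from experience; device names and the ext4 choice are placeholders, and the config path is the Debian/Ubuntu one):

    # stripe three disks into one md device
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

    # put an ordinary filesystem on top; md itself does no checksumming
    sudo mkfs.ext4 /dev/md0

    # persist the array definition so it assembles on boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf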


You just install the OS on that M.2 drive, a normal install without any manual intervention, and add the exceptions later on, like /home/tlat/Gamez for your 3-way stripe. / and /home are both on the M.2, but /home/tlat/Gamez and its subdirectories are mounted from the SSDs with whatever filesystem you want. All mounts, with their associated filesystems and mount options, are declared in the /etc/fstab file, an easy text file to work with. Network shares work the same way; they are just another type of filesystem to mount into a specific directory.
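
For illustration, hypothetical /etc/fstab entries for that layout (the UUIDs are placeholders, look up the real ones with blkid, and the filesystem choices are just examples):

    # / and /home on the M.2 drive (the installer writes these lines)
    UUID=<root-uuid>    /                 ext4   defaults          0  1
    UUID=<home-uuid>    /home             ext4   defaults          0  2
    # the 3-SSD stripe, mounted inside the home directory
    UUID=<stripe-uuid>  /home/tlat/Gamez  btrfs  defaults,noatime  0  0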

You either go the mdadm(+LVM) route (the Linux standard software RAID) or you go with BTRFS or ZFS. The latter two offer much more than just a filesystem, and easier management once you’ve made yourself at home, but you pay with performance for their enhanced safety (they are copy-on-write, CoW, filesystems). Things like native compression give some performance back, however. ZFS has lots of handy sorcery, but the rabbit hole can be rather deep.
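
As one example of that sorcery, turning on ZFS’s native compression is a one-liner (the pool name “tank” is hypothetical; child datasets inherit the property):

    # enable lz4 compression on the pool’s root dataset
    zfs set compression=lz4 tank

    # verify it took effect
    zfs get compression tank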

No problem at all. The RAID part is handled by mdadm or BTRFS/ZFS. Linux only knows: “Oh, block storage device sda/md0… so-and-so big, that nasty filesystem, user wants it mounted at xy, alright!”


This subject is interesting to me also: I’m about to put together a new PC, and I intend a three-NVMe-drive scheme for Linux (boot, system, home) plus a single NVMe drive for Windows.

I want to put the UEFI/GRUB partition on one NVMe drive then boot into an OS on a different, separate drive.

Is this possible? I tried the three-disk scheme in a VM but it wouldn’t boot until I made the system partition bootable. I have no idea what Windows expects these days - it’s a long time since I built and configured a Windows system.

The drives are pretty large, and having a separate drive per partition might be overkill. I’m thinking I might split the NVMe drive plugged directly into the CPU into one Linux system partition and one Windows partition. TBH, it’s a bit of an embarrassment of riches when it comes to NVMe drive capacities and the number of motherboard M.2 slots in 2022.

Thanks for any info.