Lots of drives; ext4 + LVM vs zfs (vs btrfs)?

Hey!

I want to wipe my system and reinstall it for that clean slate. Atm my stuff is all over the place and I don't like it; I want to organize it, like in pools or something. As you might've guessed, I am not very educated about storage stuff.


Dat mess

There's a caveat though: I am dual-booting with Windows 10 (dat games, bro) and atm I use an ext3 driver so I can read data from the Linux partitions when I want/need to. I didn't find anything like that for ZFS. Also, I have 3 SSDs (1 for each OS and 1 M.2 that I don't really use).


To get to the point:

Please recommend me a way to have an organized storage that I can access (at least read) from Windows.

  • Stick with separate ext4 partitions, pal
  • Use LVM, bro
  • ZFS or bust, mate
  • Btrfs is the way, friend
  • FAT16 MASTERACE, BOI


Please do provide any relevant info. Also if zfs is that good, I may live with 100% separate OSes.

On a side note, there is a BTRFS driver for Windows:

That's at least the first one that comes up; I haven't used it. Guess it depends on the scenario.
As far as I know (please someone correct me), ZFS wants equally sized drives within a vdev (it can only use the capacity of the smallest one), while BTRFS can handle differently sized drives. Very much depends on whether you want to go JBOD style or have a RAID-type setup though. BTRFS RAID1 is stable as far as I know, BTRFS RAID5 not so much.
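To make that concrete, here's a minimal sketch of a btrfs RAID1 across two differently sized disks; the device paths are hypothetical and the commands are destructive, so only run them against spare disks:

```shell
# Create a btrfs filesystem with RAID1 for both data and metadata.
# btrfs mirrors at the chunk level, so the disks don't have to match.
# Note: with only two disks, usable space is still limited by the
# smaller one; the flexibility really shows with 3+ mixed-size disks,
# where btrfs can balance chunks across the larger drives.
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc

# Mount it (any member device works) and inspect the allocation.
mount /dev/sdb /mnt/pool
btrfs filesystem usage /mnt/pool
```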


Hmm, didn't know that about ZFS. I need to read more on that.

Is btrfs stable though? I heard that it's too early in dev for prod use.

Hmm, not sure about that, calling in @wendell on this.
When he did the videos like 2 years back, he said that JBOD and RAID 0/1 are rather stable. Shortly after that, the RAID 5/6 issue turned up; not sure how far development is on that end.

Status details are here:
https://btrfs.wiki.kernel.org/index.php/Status


You seem to know much more about all of this.

Can you please tell me how bad/good/okay it is to mix SSDs and HDDs in volumes? Atm I like to use SSDs for / and C: so the OSes boot faster, and HDDs for everything else (a few games also get SSD space). Can I replicate that with volumes? A separate SSD volume for / and an HDD volume for /home, /var, /opt?

Also, if you know a good place to read on this, please do link me so I can RTFM :slight_smile:

Here's the thing: you don't really need ZFS or BTRFS except in large arrays or in scenarios where redundancy is preferred. That said, if you want to use one of the two, ZFS is far more mature and stable, and is found much more often in production.

I'm fairly certain you're gonna run into trouble getting Windows to read any RAID-aware or enterprise filesystems in general, so if Win10 on the metal is a must, stick to ext or maybe XFS/UFS.

another good option is to partition and mount specific parts of your linux volume and windows install as described by @mihawk90.
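The "SSDs for /, HDDs for /home" split asked about above maps cleanly onto LVM: one volume group per drive class. A rough sketch, with hypothetical device names (one SSD partition, two HDD partitions):

```shell
# Mark the partitions as LVM physical volumes.
pvcreate /dev/sda2 /dev/sdb1 /dev/sdc1

# Separate volume groups so fast and slow storage never mix.
vgcreate vg_ssd /dev/sda2
vgcreate vg_hdd /dev/sdb1 /dev/sdc1

# Root on the SSD group; home/var/opt pooled across the HDDs.
lvcreate -L 40G  -n root vg_ssd
lvcreate -L 500G -n home vg_hdd
lvcreate -L 20G  -n var  vg_hdd
lvcreate -L 20G  -n opt  vg_hdd

# Format as usual; the LVs show up as ordinary block devices.
mkfs.ext4 /dev/vg_ssd/root
mkfs.ext4 /dev/vg_hdd/home   # repeat for var and opt
```

The sizes here are made up; the point is that each mount point gets its own logical volume, and you can grow any of them later with `lvextend` as long as the volume group has free extents.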


Yeah the more I read the more I figure that.

I just learned that my ext3 driver for windows may be able to read LVM partitions. So maybe I'll do LVM.

Not really to be honest, most of it is just through the videos :smiley:

Also somewhat important: when you mount the drives on Windows, do they need to be writeable, or is read-only enough? Not sure I would trust a beta driver to write to my drives on an OS where it's not native; reading is a little easier on that front...

As far as I understand it's not a big deal, however in the end the slowest drive might slow down the whole pool if the write operations are dependent on one another. It might be a better idea to use the SSDs as cache drives, especially on ZFS.


Yeah, read-only is enough. I can always read from NTFS on Linux side.

I'm in a similar position (self-built NAS, my drives used to be all USB on a laptop running 24/7 ...), though my drives are somewhat organized, same directory structure on all of them.

If read-only is enough then BTRFS might be a good option, of course if Windows can read LVM-ext3/4 drives you can also use that. You can test that in a VM since you can attach drive-images from other VMs as you like. I used that to test and play around with BTRFS in VirtualBox.
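If you want to script that VirtualBox experiment, the CLI can create and attach scratch disks; the VM name, controller name, and file names below are all made up for the example:

```shell
# Create two small scratch disks (sizes in MB) to play with.
VBoxManage createmedium disk --filename disk1.vdi --size 2048
VBoxManage createmedium disk --filename disk2.vdi --size 1024

# Attach them to an existing (powered-off) VM named "fs-lab" on a
# storage controller named "SATA"; adjust both to match your VM.
VBoxManage storageattach fs-lab --storagectl SATA --port 1 --device 0 \
    --type hdd --medium disk1.vdi
VBoxManage storageattach fs-lab --storagectl SATA --port 2 --device 0 \
    --type hdd --medium disk2.vdi
```

Inside the guest the disks then appear as ordinary block devices, so you can try btrfs or LVM layouts without touching real hardware.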


Another question is, should I encrypt the drives? I don't have any real sensitive data and I don't wanna pay too much when doing disk IO, but I do like security by default.

Honestly if it's just Media content I personally wouldn't bother with it, but that really comes down to personal choice.

Don't use garbo ext, use XFS, it's much faster.

I would put the HDDs in RAIDZ using ZFS with compression for /home, giving 1 TB of net usable storage and leaving 1.3 TB of raw (single-disk) storage you can still use to share files with Windows (ext/FAT/NTFS).
Boot Linux off an SSD, boot Windows off another SSD. Use the remaining SSD for games.

If you can spare $50, get another 2 TB disk and make it all (2.5 TB) fully redundant.
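The RAIDZ-with-compression idea above would look roughly like this; the pool name and device paths are hypothetical:

```shell
# Three HDDs in a RAIDZ1 vdev: one disk's worth of parity,
# so the pool survives a single drive failure.
zpool create -o ashift=12 tank raidz /dev/sdb /dev/sdc /dev/sdd

# A dataset for /home with transparent lz4 compression, which is
# cheap enough on modern CPUs to be effectively free.
zfs create -o compression=lz4 -o mountpoint=/home tank/home

# Check how much space the compression is actually saving.
zfs get compressratio tank/home
```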

Well, in the end I decided against the device encryption; it's an unnecessary complication in my case. I decided to roll with LVM + ext4. Why? LVM is simple and easy to use, and I'm already familiar with ext4, so that's that.

I went with separate volume groups for the SSDs and HDDs. And ofc, a separate boot/EFI partition outside of LVM.

Please forgive my shitty screenshots, atm I barely have vanilla i3 installed. Gotta rice it up tomorrow.