When I think about reliability in a filesystem, it comes down to two things: fault tolerance and data integrity. ZFS checksums every block on write and verifies that checksum on every read, and it never overwrites data in place (copy-on-write), which protects integrity. And of course it has redundancy built-in, part of its DNA, which covers fault tolerance.
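To make that concrete, here's roughly what checking and exercising those guarantees looks like from the command line. This is just a sketch; the pool name `tank` is my own example, not anything from the thread.

```shell
# Show which checksum algorithm a dataset uses
# (fletcher4 is the usual default):
zfs get checksum tank

# A scrub walks every allocated block, verifies its
# checksum, and repairs from redundancy if a copy is bad:
zpool scrub tank

# Reports progress and any checksum errors found:
zpool status tank
```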
The Linux implementation of ZFS isn’t great yet, and it can be very expensive to make performant, but that has nothing to do with the technology itself. These things will improve over time.
It sounds like you’re really asking about the least buggy filesystem, and that’s EXT4. It’s bulletproof.
Here’s my problem with this question. Different storage devices behave differently under different filesystems. I’d never use a journaling FS on an SSD, and I’d never use a FS without some form of fallback mechanism on an HDD. So, here’s my vote:
People keep rattling off those “requirements” without knowing what they’re for.
ECC is suggested if you care about your data, whether or not you run ZFS. If you don’t have ECC, ZFS is still fine; you may just get corrupted data in memory, as you would without ZFS. Traditionally, ZFS has been aimed at the enterprise, where data integrity is the whole reason for running it. Hence the ECC suggestion. Having your filesystem do checksums, etc. is less valuable if your memory isn’t reliable.
The 1 GB of RAM per terabyte of storage guideline only applies if you turn on in-line de-duplication, which is not recommended unless you have a very specific workload. If you don’t turn on de-duplication, you do not need that much memory.
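For ZFS on Linux specifically, the relevant knobs look roughly like this. The pool name `tank` and the 8 GiB ARC cap are illustrative assumptions, not recommendations from the thread.

```shell
# /etc/modprobe.d/zfs.conf — cap the ARC (ZFS's read cache)
# if you don't want it competing with applications for RAM.
# The value is in bytes; 8589934592 = 8 GiB:
#
#   options zfs zfs_arc_max=8589934592

# Dedup is a per-dataset property and off by default; make
# sure it stays off unless you've sized RAM for the dedup table:
zfs get dedup tank
zfs set dedup=off tank
```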
BTRFS doesn’t do what ZFS does. So no, it doesn’t eat its lunch at all. It doesn’t compete.
Red Hat has it as an option in the installer (in fact, it was the default for a couple of point releases, and a major release or two of Fedora, but it’s back to being a secondary option now). I’m not entirely sure what the decision process was there, but that’s what I know.
I really need to do a deep dive on BcacheFS vs BTRFS…
Yeah, there is that, but there has to be more to it. I want to know about feature comparisons, raid56 stability and production readiness.
I’d love to move my OS drive over to bcachefs, but I really need to know that it’s not going to die on me, and while you clearly know a lot, I can’t just take your word for it.
I think the reason btrfs got upstreamed first is that they prioritized replicating ZFS functionality over making it actually work, so all the features are there on paper (except robust caching).
bcachefs, for example, has full CoW that doesn’t tank its performance, working checksumming, and an interesting implementation of that data structure that might allow it to avoid scrubs and other maintenance routines.
they don’t have lz4 compression, replication/dedupe, or native snapshots, because they don’t want to ship them broken
they’re also a 100% independent team. btrfs has industry backing.
Probably. Same scenario as DTrace vs. SystemTap.
DTrace actually worked from the outset.
Linux is full of NIH (not invented here) bias, regardless of whether the outside technology is good - partially due to the GPL, partially due to ego, I suspect.
It’s a real shame FreeBSD doesn’t have better driver support, or more people on board in general. ZFS and DTrace are both first-class citizens on FreeBSD.
FSF hostility towards non-gpl components of the base system is pretty rampant, yeah
even if the licenses are compatible
That said, I don’t think FreeBSD or Linux is good for the average desktop user, and both have their uses elsewhere (I’ll never build a storage appliance or a router on Linux again, for example, but we’ve been waiting for PCIe passthrough on bhyve for years now).
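For reference, when PCI passthrough on bhyve does work for a device, the setup is roughly this. The `2/0/0` bus/slot/function, the guest memory size, and the VM name are all made-up examples.

```shell
# /boot/loader.conf — reserve the device for passthrough
# at boot and load the hypervisor module:
#
#   pptdevs="2/0/0"
#   vmm_load="YES"

# Then hand the device to the guest as a PCI slot.
# -S wires guest memory, which passthrough requires:
bhyve -S -m 4G \
    -s 0,hostbridge \
    -s 7,passthru,2/0/0 \
    -s 31,lpc -l com1,stdio \
    myvm
```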
Well, it’s good that they’re making some cash from it, but that’s not enough for bug bounties or hiring a team. I’m thinking more along the lines of $400k a year so they can hire 2-3 people.