What would you say is the most reliable file system?

I use BTRFS RAID 1 at home. As a sample size of one, it's fine for my spinning rust and ragtag team of HDDs.

I'd like a new FS that has all the features: error correction, CoW, speed, pools, etc. We can't keep holding on to old filesystems given the sizes of today's drives and the amount of data people choose to save as important.

And yes, BTRFS RAID 5/6 is not reliable. Seems the people using BTRFS are using RAID 1. /shrug I don't know why it never gets fixed.

You realize you're describing ZFS, right? It was released in 2005, which is very new by filesystem-codebase standards, and it has every feature you mention.

In contrast, ext was released in '92, XFS in '94, HFS in '85, and FAT in '77.

The way they architected all of btrfs's management elements would need to be changed completely to fix RAID 5/6, and it would break compatibility with every existing installation. The team is very "don't measure and then cut until it sorta looks the right length" when it comes to design processes.

If you want to play around with an experimental filesystem instead of using ZFS, I'd recommend BcacheFS; the people making it have proven they're more competent than the btrfs people pretty consistently.

From their site:

We prioritize robustness and reliability over features and hype: we make every effort to ensure you won’t lose data. It’s building on top of a codebase with a pedigree - bcache already has a reasonably good track record for reliability (particularly considering how young upstream bcache is, in terms of engineer man/years). Starting from there, bcachefs development has prioritized incremental development, and keeping things stable, and aggressively fixing design issues as they are found; the bcachefs codebase is considerably more robust and mature than upstream bcache.

Fixing bugs always takes priority over features! This means getting features out takes longer, but for a filesystem, not losing your data is the biggest feature.

https://bcachefs.org/


ZFS is impressive. But on a desktop, with recommended requirements like ECC, 1 GB of RAM per TB, same-sized drives, etc.,

BTRFS eats its lunch on Linux. Not to mention ZFS has only recently come to Linux.

You seem to be buying into the anti-btrfs hype.

Yes, there are problems with it. Yes, it’s not finished.

The same goes for zfs. Show me a distro that ships with zfs support. Now show me a distro that ships with btrfs support.

The raid56 issue is solved by not using raid56, because you shouldn't ever do that anyway.

I've never seen the other "problems" you claim to encounter, and I'm running my OpenStack cluster and VMs on btrfs.

The critical thing keeping me away from zfs is that it doesn't support TRIM. Talk about a waste of a filesystem… I critically need performance, and btrfs provides that for me. Hell, btrfs even provides better VM performance on our SSD cluster than zfs because of TRIM support, even though btrfs is shit at VM workloads.
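For what it's worth, TRIM on btrfs can be continuous or periodic; a minimal sketch (the device and mount point are just examples):

    # mount with continuous discard (TRIM issued as extents are freed)
    mount -o ssd,discard /dev/sdb1 /mnt/data

    # or trim periodically instead, which avoids the per-delete overhead
    fstrim -v /mnt/data

    # most systemd distros ship a weekly timer for exactly this
    systemctl enable --now fstrim.timer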

Not entirely true. ECC isn't needed; stop perpetuating that meme.

Also, the 1 GB per TB is only for active dedup.


Hence I said recommended requirements. I've only used ZFS a few times and it worked fine with neither. I ghetto-FreeNAS'ed for a few years. I just gravitated to BTRFS for flexibility.

ECC shouldn't even be recommended. You get absolutely no benefit from it, but that might be getting into the weeds.


I follow the information on the interwebs :frowning:

You don't need ECC, and you can tune the ARC to whatever size you want and tweak the defaults to get it running on much less than that. I ran it for years on a ThinkPad with 4 GB of RAM with no problems. If you aren't running dedup, you need nowhere near that amount of RAM.
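Capping the ARC is a one-liner; a minimal sketch using ZFS on Linux's module parameters (the 1 GiB cap is just an example value):

    # persistent: cap the ARC at 1 GiB at module load time
    echo "options zfs zfs_arc_max=1073741824" > /etc/modprobe.d/zfs.conf

    # or shrink it on a running system; the ARC releases memory to fit
    echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max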

ZFS is complete; the Linux port isn't. The codebase is 100% there, though. Big difference.

It has been on Linux since 2012.

Again, no. I used it enthusiastically early on, then I ran into a lot of issues with it, and the way their devs treat commonplace issues in their code is unacceptable. They let those issues pile up until they're done re-implementing whatever feature they like at the moment, unless it affects their finishing of that feature, and tell everyone actively using their project to fuck off if they don't like it. The fact that they got upstreamed and BcacheFS hasn't is baffling to me.

There are real issues with snapshots and other CoW features in btrfs that I need for all my storage stuff; they would prevent me from using it even if I wanted to.

The bottom line is that there's no magic bullet for this. Reliability on the level you describe takes a certain amount of resources, as does management, and shortcuts necessarily compromise that data integrity, because they usually involve delaying writes or other tricks that keep data from entering the parity and metadata structures immediately.

I'm not anti-btrfs or pro-zfs; I'd use any proper CoW, RAID-aware filesystem that got upstreamed if I trusted the devs not to laugh at all the people losing their data while they build a widget that adds a sorter or whatever.

which is why I’m so glad BcacheFS is a thing:

https://bcachefs.org/


That's Red Hat for you. I'm not thrilled about bcachefs not being upstreamed either…

I really suppose it all comes down to required features. Btrfs works for me and the company I work for; zfs just doesn't. If we were willing to put up with apt and its broken package management, I might give it a go. When I get EPYC servers in for our lab, I'll give zfs a go on Ubuntu, but the lack of TRIM is a real killer. TRIM is not supported on Linux; it is on BSD and Solaris, but who uses those for virtualization?

As just an FS, BTRFS is an option on SUSE and even Fedora 28 for a workstation.

ZFS is not a default FS for the future.

I feel the pain; nothing is really kicking all the goals, and Windows and Mac are just swinging in the wind with random shit that is horrible.


Actually, Mac has apfs, which seems like it’s pretty decent.

Exactly. I can't say for sure, but I'd be a hell of a lot more likely to use zfs if Red Hat would just say fuck the GPL, package zfs in mainline, and use it as an option for installation.

Nothing Mac is decent; it's locked down and Mac-only…

There are rumors the next CentOS will ship with zfs support, but it may just be wishful thinking.

Also, isn't the TRIM port like the first thing on the list right now for ZoL dev?

Joyent and netflix, but that’s it as far as I can tell

An official port of zfs is coming to mac soon (and windows a bit later)


HFS informed all the design decisions for things like ext, actually.

They also made CUPS and contribute to a ton of other OSS projects.


A cross-platform, next-gen FS would be nice. I think that is the point.

FAT 2.0


Yeah, basically, someone got it 90% of the way there and it hasn’t seen any progress in 18 months because people simply didn’t want to test.

From what I remember. Lemme log in to GitHub and find the issue.

Found it: https://github.com/zfsonlinux/zfs/pull/7363

Oh hey, the committer said a week ago he'd be testing the new code before merging.

Man, the ZFS UX on Linux is so far behind.

Yeah, I just realized that. The other one I was looking at was dweeezil’s code. He’s talking about testing then merging into his personal repo to continue working on fixing things.

I've used both ZFS and BTRFS on Linux, and I have to say I prefer BTRFS. My use case is RAID 10, not RAID 5/6, so they're on even footing (BTRFS has a write hole with RAID 5/6).
Here are a couple of things I experienced during my extended time using both:

  • ZFS is very slow without throwing a huge amount of cache at it.
  • Expect ZFS to use about 1 GB of RAM per 1 TB of storage.
  • ZFS may (will) stop working after kernel updates and will not work again until a ZFS update.
  • A ZFS pool created or run on a newer version of ZFS cannot be imported on a system running an older version of ZFS (ZFS FUSE uses an older version, so good luck using that to recover data).
  • ZFS's cache does not appear as cache to Linux; basically, if you need more RAM (to start a VM, for example) you have to wipe your ARC in order to free that RAM.
  • ZFS is bootable, but it is difficult (fear kernel updates).
  • ZFS is very feature-rich and the compression is quite good, but expect to use even MORE RAM if you decide to use compression.
  • ZFS will basically never lose your data* (*but it might).
  • BTRFS is missing features (examples to come).
  • It is well known that you should NOT use BTRFS with RAID 5/6.
  • In BTRFS, a degraded two-disk RAID 1 becomes read-only, which sucks (see the sketch after this list).
  • There is no way to use an SSD as cache in BTRFS (excluding bcache), whereas there is in ZFS.
  • BTRFS is bootable without having to fear updates.
  • OpenSUSE lets you boot from BTRFS snapshots, which is a great option for recovering from bad updates.
  • BTRFS's real-world performance is comparable to ext4's.
  • BTRFS's CoW may cost you performance on certain things like libvirt and MySQL.
  • BTRFS can have CoW selectively disabled on folders (like the entirety of /var, for example); see the sketch after this list.
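To put the degraded-RAID 1 and selective-CoW points above in command form, a rough sketch (device names and paths are examples):

    # mount a degraded two-disk RAID 1 read-write so the dead disk can be replaced;
    # on older kernels you historically got one rw shot before it went read-only
    mount -o degraded /dev/sdb /mnt/pool
    btrfs replace start 2 /dev/sdd /mnt/pool

    # disable CoW on a directory; only files created afterwards are affected
    chattr +C /var/lib/mysql
    lsattr -d /var/lib/mysql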

I do actually use both BTRFS and ZFS. In my opinion BTRFS is a better solution for desktop use and ZFS is a better solution for server use.


It seems like a lot of your issues with zfs are (a) specific to the Linux port, which is itself a work in progress, and (b) a lack of understanding of the filesystem. Granted, ZFS requires more tuning and configuration out of the box than anything else.

I'd argue that mdadm + ext4 is the best Linux-specific RAID solution on the desktop at present, and the functionality that BcacheFS offers is better implemented than btrfs's.

Regarding the ZoL-specific issues, as this is an OS-agnostic comparison (we don't judge HFS performance based on the Linux module, or other non-native systems based on their FUSE drivers, right?):

Not an issue on native systems, and several distros even ship with zfs in their default kernels.

Partly a lack of tuning on your part, and partly an issue with the way Linux handles vmalloc(): a flaw in the Linux kernel that the ZoL devs can't fix by themselves. Due to Red Hat's influence and the FSF's general hostility to the CDDL, there's no way for them to make kernel changes or better integrate zfs on a lot of different levels. It's unfortunate, but zfs may never run at full speed on Linux.

Again, though, not an issue on any of its native systems.

Boot environments have been a Unix mainstay since the late nineties, available on Solaris and other operating systems since before btrfs or zfs existed; it's actually really surprising that Linux went this long without them.

That said, from what I understand of btrfs snapshotting (do they have checkpoints implemented yet?), there's a performance overhead with their implementation similar to LVM snapshots, and that's not desirable for someone who wants to manage boot environments often.

I'm not sure if boot environments are even on the ZoL roadmap, but it's a feature that ZFS has supported for a very long time on its native operating systems.
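On those native systems the workflow looks roughly like this; a sketch using illumos/FreeBSD's beadm (the environment name is made up):

    # snapshot the current boot environment before an upgrade
    beadm create pre-upgrade

    # if the upgrade goes bad, switch back and reboot into the old environment
    beadm list
    beadm activate pre-upgrade
    reboot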

As is native zfs's, if not tangibly better with proper configuration. Also, that's not the case with CoW enabled on btrfs.

This isn't a feature. It is vital for the integrity of the larger storage pool that this data structure not be user-editable. Optional CoW is worse than no CoW.

This will become a feature in btrfs as soon as they start fixing all of their architectural problems to enable feature completeness, I guarantee it.

Also, while it is an issue in native zfs, the time between incompatible versions is much greater, and there's no zfs FUSE branch to worry about.
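For what it's worth, checking and bumping a pool's on-disk version is straightforward; a sketch with a hypothetical pool named tank:

    # list pools whose on-disk format is older than the running zfs
    zpool upgrade

    # upgrade a pool in place; note this is one-way, and older zfs
    # releases will no longer be able to import it afterwards
    zpool upgrade tank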

With that out of the way, I'd look into bcachefs if you want to support a Linux CoW filesystem that actually has a chance at being feature-complete AND stable/bug-free. The way the btrfs devs treat their issues is just as bad as systemd/PulseAudio.
