ZFS vs EXT

I’m aware that ZIL stands for ZFS Intent Log, but you used it while talking about Btrfs, so my mind did a weird thing.

I am a big fan of ZFS, but they still don’t allow us to grow a raidz vdev, and who knows when they will. ZoL is in some ways still ZFS circa 2012,
and I have no idea whether they would even add that to ZoL first.

I was responding to this comment:

I see it now :smiley: I’m not awake yet.

Two of the major design points about ZFS that trip people up are:
  • It DOES NOT go back and retroactively change things when a setting is changed. ZFS writes data once, and then leaves it the fuck alone forever.
  • Certain settings are limited to a pool-wide or dataset-wide scope, sometimes can’t be changed at all, and when they can, the change only affects newly written data.

Meaning the only way to make certain adjustments is to create a new pool/dataset and copy things over (in some cases you DON’T want to use ZFS send, as it will preserve some of the original settings). This is fine for enterprise, where replacing a 4U rack of drives is the quickest and therefore cheapest solution for making major changes, and the admin already knows what to do and not to do.
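As a rough sketch of what that looks like in practice (pool and dataset names here are made up):

    # Property changes only apply to data written AFTER the change;
    # existing blocks keep whatever settings they were written with.
    zfs set compression=lz4 tank/media      # old blocks stay uncompressed
    zfs set recordsize=1M tank/media        # only new files get 1M records

    # To get new settings applied to existing data, make a fresh dataset
    # with the properties you want and copy at the file level (a raw
    # zfs send can drag the original properties/block layout along).
    zfs create -o compression=lz4 -o recordsize=1M tank/media2
    rsync -a /tank/media/ /tank/media2/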

But a home NAS guy who barely knows the command line, or doesn’t know that redundancy is not a backup, can run into severe problems if he doesn’t know where to hang out to learn what not to do.

The greatest risk of data loss with ZFS is the admin himself. I love ZFS, but I find it difficult to recommend without being able to judge a person’s ability and willingness to put in the time required to get safely squared away. It’s a bit like a gun or a lathe: it can be a wonderful tool, or it can destroy something precious due to what is typically (but not always) personal negligence.


Also throwing BcacheFS into the mix, which is on my “interesting thing to check on in a couple of years” list, as it’s currently being cleaned up for kernel inclusion, last I checked.
It supports data/metadata checksumming, compression (lz4!), raid1/10, and cache levels (foreground, background, and promote).

To be fair, the only way to convert from say EXT4 to XFS or Reiser or whatever is… guess what… wipe and restore.

The only way to change from say RAID5 to RAID6 is… wipe and restore.

ZFS is no different?

This isn’t really a failing in ZFS, but ZFS does make it a lot easier to make dumb choices because you’re dealing with the RAID configuration and filesystem settings at the same time.

The only “issue” with ZFS is deduplication, and the fact that so many people either

  • see it, think “FREE SPACE!!”, and try to run it on inappropriate hardware because they’re cheap and think they’re getting something for free
  • OR assume it works like other arrays that do de-duplication without the RAM requirement, because those do it offline or as a scheduled job
  • OR do a little more investigation, see the RAM recommendations for ZFS, and freak out.

If ZFS never included deduplication at all, people wouldn’t bat an eye at the requirements for it.

So… if you’re worried about resource utilisation on ZFS -

  • forget deduplication exists.
  • pick a non-dumb raid level for your application/requirements

and you’ll be fine.
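If you’re curious what dedup would actually do with your data before committing to it, OpenZFS can simulate it; a minimal sketch, assuming a pool called tank:

    # Walk the pool and build a simulated dedup table (changes nothing,
    # but reads everything); the histogram shows the ratio you would get.
    zdb -S tank

    # Every unique block costs a DDT entry (a few hundred bytes each), and
    # the whole table really wants to stay in RAM - the old rule of thumb
    # is roughly 5 GB of RAM per TB of deduplicated data.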


It’s a filesystem, what part of it should be doing any of that, really?

Imagine if someone made a filesystem for video content creators that automatically transcoded any video file placed in it into H.265 or something, and converted every image into a JPG.

What does that have to do with the software whose job is to define the format for file names, which blocks which files live on, etc.?

A filesystem’s job is to ensure files are stored and retrieved without losing data.

The checksums (edit: hashes, even) ensure this.

You really need to read up on the subject, it would appear.

edit:
block hashes will detect multi-bit errors, much more reliably than dumb parity checks.
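You can see this machinery from the command line, too; a quick sketch with a made-up pool name:

    zfs get checksum tank     # fletcher4 by default; sha256/sha512 are available
    zpool scrub tank          # read and verify every block in the pool
    zpool status -v tank      # the CKSUM column counts detected (and, with
                              # redundancy, repaired) corruption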


I’m starting to think you’re just arguing for the sake of arguing.


Imagine a filesystem that would read back different data than what you wrote and act like it was totally normal.


Welcome to ext4


An oldie but it’s short and hits some key points.

The whole panel is great if you have time, btw.


You mean ext4?

edit:
beaten like a red-headed stepchild

So basically ZFS is too good, including a RAM cache (ARC) for free, which for storage appliances is a huge boon, but for desktops might be misconstrued as a downside?
The way I see it, it turbocharges rust.
You can set system properties to boost NVMe queue depths, but not both at the moment (only ARC for tiered storage)?
At least until they add the queue depth as a pool property.
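For reference, those system properties are OpenZFS kernel module parameters on Linux rather than anything on the pool; roughly something like this (values are just illustrative):

    # Cap the ARC at ~16 GiB (value is in bytes)
    echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

    # Per-vdev queue depths for the different I/O classes; raising these can
    # help fast NVMe vdevs, at the cost of latency on slower ones
    echo 32 > /sys/module/zfs/parameters/zfs_vdev_async_read_max_active
    echo 64 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active

    # Put the same settings in /etc/modprobe.d/zfs.conf to persist across reboots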

You can also use compression to truly turbocharge anything SATA (maybe not the most obvious thing when you think about it at first).

https://facebook.github.io/zstd/

Over a GB/s decompression rate at over a 2x compression ratio, meaning you could use half the disk space and get twice the read speed SATA is capable of in the process. :open_mouth:

Don’t think you would want to do that on a desktop either, but on a dedicated storage box, maybe.
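If anyone wants to try it, the knob is per-dataset; a sketch with made-up names (zstd needs OpenZFS 2.0+, lz4 works everywhere):

    zfs set compression=zstd tank/bulk
    # ...write some data, then check what it is actually buying you:
    zfs get compressratio tank/bulk
    # A 2.00x ratio means reads pull half the bytes off the disks, which is
    # how a SATA-limited pool can hand back data faster than the raw link speed.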


That whole video was great, watched it a while back. Brian’s talks are usually on point.

NVM my previous post - seems like I didn’t have my facts right. Seems like L2ARC can be seen as tiered storage.

This makes me want to use ZFS for my Steam game library. I already get better performance from rust due to RAID, and I could further improve it with an SSD L2ARC.
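If I go that route, the setup would be something like this (device path and dataset names are hypothetical):

    # Add an NVMe/SSD as an L2ARC (cache) vdev on an existing pool
    zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE_SSD

    # A dataset tuned for a game library: big records, lz4, no atime updates
    zfs create -o recordsize=1M -o compression=lz4 -o atime=off tank/steam

    # Watch how much the cache device is actually serving
    zpool iostat -v tank 5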

IMO we should have a ZFS pinned topic, I think there is enough interest. I’d really like to understand the above more thoroughly.

yeet


I think one of the best places to begin (if you want to truly master ZFS) is the books written by Michael Lucas: https://mwl.io/nonfiction/os - he gets mentioned on the BSDNow podcast from time to time.

I own some of his other sysadmin books; he has a nice, easy-to-follow writing style.
