Makes sense. I can make up a write up of what I know of each and my experiences with them tomorrow… Maybe that’ll help
Running 72 TB of raw disk with only 32 GB of RAM, zero issues
ZFS actually works
EXT is broken
beadm (different zfs boot environments from snapshots) has been a thing in PC-BSD/FreeBSD land for… years?
Yep. I didn’t realize that was a thing though, now that I do, it’s gonna be so much easier, looks like most of the heavy lifting has been done for me.
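For anyone who hasn't seen it, the basic beadm flow on FreeBSD looks something like this (a sketch from memory, run as root; the boot environment name is arbitrary):

```
# Snapshot the current root dataset into a new boot environment
beadm create pre-upgrade

# List environments; the Active flags show which is active now (N)
# and which will be active on reboot (R)
beadm list

# If an upgrade goes sideways, mark the old BE to boot from and reboot
beadm activate pre-upgrade
reboot
```

Because boot environments are ZFS clones, creating one is nearly instant and costs almost no space until the datasets diverge.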
Oh yeah with the above EXT is broken post. I just googled “EXT4 problems” and picked the top link. I didn’t even know about that specific issue.
There have been ext4 issues since forever, and it looks like they're still frequent.
ZFS is solid.
lel, if it didn't require so many resources and have pointless features you have to turn down, and it's still unreasonable even then
And more importantly, if it actually worked natively in Linux as it does in FreeBSD. No one is going to load up Solaris either. Until the licensing changes (which is unlikely to happen until either it or Oracle becomes completely obsolete), it's doubtful that will happen in GNU space. If Linux went to a BSD license or something, yeah, maybe that could work.
But we all know how Stallman feels about the BSD license.
Edit: nvm, this topic was split.
Misinformation spouted by ignoramuses who looked at the real-time de-duplication requirements and forgot that EXT doesn't even do de-duplication at all.
1 GB per TB isn't even a big deal in 2019. And it isn't required anyway (outside of real-time de-duplication).
ZFS was originally developed on hardware slower and with less resources than the original iPhone.
claim the argument is over
I’m not here to argue, I’m here to share my expertise as a developer working in several kernel code bases with various filesystems
You are putting words in my mouth, I didn’t say anything is over. To the contrary, I fully expected you to have a response because I left the door wide open for you.
Linus Media Group is at the entry level of what ZFS is designed for. It’s used by national laboratories and cloud providers and CDNs and enterprises. It’s engineered for use on scales orders of magnitude greater than LMG. This is what you should have in mind when you consider the resources it uses. If you keep thinking it’s supposed to be optimized for your home NAS, of course it’s going to seem ridiculous. It’s not optimized for that. But it does work for that, if you don’t mind the resources it uses. And again, the tuning information you’ll find is not the tuning for your home media collection, it’s the tuning for large deployments with hundreds or thousands of users.
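For example, the single biggest knob for shrinking ZFS's footprint on a small box is capping the ARC. On Linux that's a module parameter; the 4 GiB value below is just an illustration, not a recommendation — size it for your machine:

```
# /etc/modprobe.d/zfs.conf — cap the ARC at 4 GiB (value is in bytes)
options zfs zfs_arc_max=4294967296
```

By default the ARC is allowed to grow to a large fraction of RAM and gives memory back under pressure, which is why ZFS's memory use looks alarming in `top` but usually isn't.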
For the record, this thread was split off from a different thread, so the OP was not originally an OP
I split this from the thread about ZFS boot environments on Linux.
Actually, I just had an idea.
My work PC has dual 500 GB SSDs in it.
I might try setting up a ZFS pool on the second one purely to do de-duplication. Given it's a VM store for a lab environment, I'll probably get great de-duplication rates on it.
I have 64 GB of RAM, so giving up 0.5-1.5 GB for the de-duplication database is probably not a big deal.
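Back-of-the-envelope sketch of where that 0.5-1.5 GB figure comes from. The numbers below are commonly cited rules of thumb, not measured values: roughly 320 bytes of core memory per dedup table (DDT) entry, and the default 128 KiB recordsize:

```python
# Rough ZFS dedup table (DDT) RAM estimate.
# Assumptions (rules of thumb, not measured figures):
#   - ~320 bytes of memory per unique block's DDT entry
#   - default 128 KiB recordsize, so one entry per 128 KiB block
DDT_ENTRY_BYTES = 320
RECORDSIZE = 128 * 1024  # 128 KiB in bytes

def ddt_ram_estimate(pool_bytes, recordsize=RECORDSIZE,
                     entry_bytes=DDT_ENTRY_BYTES):
    """Worst case: every block is unique, so one DDT entry per block."""
    blocks = pool_bytes // recordsize
    return blocks * entry_bytes

pool = 500 * 10**9  # the 500 GB SSD
print(f"~{ddt_ram_estimate(pool) / 10**9:.2f} GB of RAM")  # ~1.22 GB
```

The real cost is lower if blocks actually deduplicate well, and higher with a smaller recordsize (more blocks, more entries).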
Deduplication is broken don’t waste your time
Ah makes sense then.
That’s the one thing btrfs kinda does better than zfs
I think in one of the OpenZFS leadership meetings it was mentioned that a new deduplication effort might be on the table, but I don't recall the details; I don't keep notes.
I just use EXT4 for my home systems.
I don't really see that much benefit in running other file systems for home use.
Oh, that's awesome! I feel like it would be beneficial to have an offline dedupe method similar to what btrfs does.
Are you on the leadership team?
I don’t think there’s a “leadership team” so much as an open invite to attend the leadership meetings. I am merely a bystander in the meetings, but I do contribute to ZFS development.
Elaborate? I've seen other people get benefit from it… but that was some years back.