
ZFS vs EXT

file_systems
#224

Neither of us is a lawyer as far as the other knows, and there's no legal precedent on CDDL and GPL interaction at all that I can find. Which means, as I mentioned earlier, it's pointless to argue about the merits of either case if you live in a place whose court system is built on case law.

0 Likes

#225

Indeed, your initial post on the license was just misinformed. CDDL is a copyleft license like the GPL, not a permissive one (more permissive than the GPL, but still copyleft). The license wasn't chosen as a defense mechanism against Oracle; Oracle holds the copyright on much of the OpenZFS and ZFS code, which it acquired from Sun.

0 Likes

#226

Never said they didn't hold copyright, just that the way it was branched off the original codebase was designed to prevent legal action from Oracle (in practical terms).

0 Likes

#227

I will recommend that, sure. It also has an S3-compatible service, iSCSI for block storage, and a beautiful UI. Go for it.

0 Likes

#228

Yeah, when Oracle bought Sun, the community and major contributors forked the code and continued working on new features and such. Thanks to the license, Oracle effectively can't use new code contributed to OpenZFS without releasing their own code, because the copyright for new code isn't assigned to Oracle; it's assigned to the author (or their company, as it were). But that was already a feature of the license. It just happened that the majority of ZFS was written by Sun employees, so the copyright was assigned to Sun at the time, now Oracle.

The key distinction here is that Oracle changed how they license their code, not OpenZFS. Oracle stopped releasing it under the CDDL. They can change the license terms of their code however they please, but of course you can’t retroactively change the license of code that’s already been released.

OK, right, we're trying to stop geeking out about licenses. I'll stop, haha. Done.

0 Likes

#229

Yeah, exactly, I was explaining it badly. Another wrinkle is that, at least in the US, you have to pursue any and all infringement for a copyright to be enforceable, according to case law, so in cases where it's highly conditional or problematic to do so, the claimant loses the copyright if it goes to court.

Granted, there's no CDDL-specific precedent, but there's a very strong case on those grounds.

0 Likes

#230


System memory was an issue in 2005. Not an issue today. I don't care if my system were to use up to 2 GB of RAM for storage (leaving 30 GB for everything else, or 14 GB for 16 GB RAM users). If the Ubuntu installer would let me install to ZFS without crazy hacks, I'd be all over it…

2 Likes

#231

That’s on the roadmap. I was listening to a podcast and they were talking about how they don’t want to release something that has a bug in it, so they’re working hard to perfect it before release.

0 Likes

#232

There's always the TrueOS installer :slight_smile: (or FreeBSD if you're more Debian-oriented than Ubuntu)

1 Like

#233

This is true.

One of these days, I'm going to give FreeBSD a fair shake, but frankly, I'm neck-deep in Docker right now.

0 Likes

#234

Suuuuure, Canonical holding back because it doesn't want to release something with a bug in it…

In all seriousness though, ZoL root is kinda exciting because boot environments are the next step, and those are dope.

0 Likes

#235

ZoL root has been possible, if not easy, for a while now, assuming you know how to install GNU/Linux.

0 Likes

#236

Possible, just not advisable.

0 Likes

#237

better than putting your root on ext4 :man_shrugging:

0 Likes

#238

Not so much for root. Your data you don't want to lose; the OS is replaceable on most devices and computers.

2 Likes

#239

Don’t forget about servers though. Reliable storage helps make for a reliable system. Peace of mind is priceless.

0 Likes

#240

So if I tune my filesystem manually I can try to make it less asinine.

To put it in pretty simple terms so it's easy for everyone to follow:

The entire point of having algorithms like MD5, SHA, etc. is TO LET THOSE ALGORITHMS DO THE WORK FOR YOU. When you want to see if a file is the same as another file, you compare a single hash, NOT THE ENTIRE FILES.
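(For illustration only, a minimal sketch of that idea in Python with hashlib; the file names are hypothetical:)

```python
import hashlib

def file_hash(path, algo="sha256", chunk_size=1 << 20):
    """Hash a whole file by streaming it in 1 MiB chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Two files are (almost certainly) identical if their digests match;
# you compare two short strings instead of every byte of both files.
print(file_hash("a.jpg") == file_hash("b.jpg"))
```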

By having a hash for every single block, which is generally much smaller than the vast majority of files any modern user would have, you are creating more work for yourself, and more work you also need to store multiple copies of. The only benefits are mostly irrelevant: the one time a file actually does not write correctly, you would already be able to identify which specific block is bad, or if you use deduplication you can use the same table to deduplicate at the block level, which is probably the only sane reason to do it this way. Even though, again, that's stupid as well, because you are directly creating fragmented files that you will have to rebuild on the fly at the time of request, which is obviously not faster than just having a contiguous file.
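(A toy sketch of that block-level dedup table, assuming fixed 4 kB blocks and Python; this is just the idea, not how ZFS actually implements it:)

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this toy example

def dedup_store(data: bytes):
    """Store each unique block once, keyed by its hash; keep an ordered layout."""
    store = {}    # block hash -> block bytes, stored only once
    layout = []   # sequence of block hashes that describes the original data
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # duplicate blocks land on an existing key
        layout.append(digest)
    return store, layout

def rebuild(store, layout):
    """Reassemble the data on request -- the 'rebuild on the fly' step."""
    return b"".join(store[d] for d in layout)

data = b"A" * 8192 + b"B" * 8192 + b"A" * 8192   # repeated content dedups away
store, layout = dedup_store(data)
assert rebuild(store, layout) == data
print(len(layout), "blocks referenced,", len(store), "unique blocks stored")
```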

If you are some tinfoil hat who won't buy x86 or amd64 because it's not secure, or if you have some legacy programs that require binary compatibility which you refuse to recreate/reinvent, etc., maybe you can justify buying a Sun machine in excess of $150,000, and maybe you get to decide that you have a data set that requires completely irrelevant redundancy, poorly optimized performance, etc.

As far as bloat goes, you Sun zealots seem to forget: say Windows is 'massively' bloated, but then when's the last time Windows used, say, a modest million times more RAM than a similar Linux system?

0 Likes

#241

You get a hash of a compressed block, which is definitely smaller than the vast majority of uncompressed blocks a user would have.

An advantage of hashing each block is that when you modify existing files, you don’t have to re-hash the whole file, just the block that changed.
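(A rough sketch of that in Python; the 128 KiB block size and the file contents are just illustrative values:)

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # illustrative block size

def block_hashes(data: bytes):
    """One checksum per block instead of one checksum for the whole file."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

data = bytearray(b"x" * (10 * BLOCK_SIZE))   # a 10-block "file"
hashes = block_hashes(bytes(data))

# Edit one byte in block 7: only that block gets re-hashed and rewritten;
# the other nine checksums are still valid.
data[7 * BLOCK_SIZE + 42] = ord("y")
start = 7 * BLOCK_SIZE
hashes[7] = hashlib.sha256(bytes(data[start:start + BLOCK_SIZE])).hexdigest()
print("blocks re-hashed after the edit: 1 of", len(hashes))
```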

0 Likes

#242

And let's say I have a $99.99 cellphone from 3 years ago that has a shitty 5 MP camera; I can get pictures from 2–5.5 MB as compressed JPEGs. With a common block size of 4 kB (4096 bytes), you get 2000–5500 / 4 = 500–1375 blocks per picture. Is their compression ratio in that 500–1375:1 range? And then how much time is spent compressing each block just to generate the hash to compare? How much time comparing hundreds or thousands of hashes, many orders of magnitude more than just comparing two?
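(Spelling out that arithmetic, assuming the numbers above, 2–5.5 MB photos and a 4 kB block:)

```python
# Blocks (and therefore per-block checksums) per photo at a 4 kB block size.
block_size = 4096                     # bytes
for photo_mb in (2.0, 5.5):           # JPEG sizes quoted above
    photo_bytes = photo_mb * 1000 * 1000
    blocks = photo_bytes / block_size
    print(f"{photo_mb} MB photo -> ~{blocks:.0f} blocks, one checksum each")
# prints ~488 and ~1343 -- about the 500-1375 range quoted above
```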

Just saying, but if I have, say, a WD Blue 2 TB shitty consumer HDD, I can still easily write in excess of 50,000 files and not have a single one fail, multiple times in a row. At least for me, I'd rather it just take longer that less-than-1-in-50,000 times than have it be slower the other 49,999+ times.

0 Likes

#243

You’re right, ZFS doesn’t make a whole lot of sense for a cheap burner phone or a single drive that doesn’t do anything important.

0 Likes