Yup, and you can also have a mix of different-sized VDEVs in a pool and ZFS will balance across them.
For quite some time I was running with 2x mirrors of different sizes. Performance may not be optimal, but if you just need to expand space, you can, so long as you can get enough disks to fully upgrade a single VDEV in your pool (or even add another VDEV).
If all you care about is resiliency (vs. performance - e.g., for archive/backup/nas type storage), then adding a VDEV to an existing pool is no big deal.
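A rough sketch of both options, with made-up pool and device names (adjust for your own layout):

```sh
# Option 1: add a second mirror VDEV to a hypothetical pool "tank".
# ZFS stripes new writes across VDEVs, weighted by free space.
zpool add tank mirror /dev/sdc /dev/sdd

# Option 2: grow an existing mirror by swapping in bigger disks,
# one at a time, waiting for each resilver to finish.
zpool set autoexpand=on tank
zpool replace tank /dev/sda /dev/sde   # then repeat for the other disk

# Check the resulting layout and capacity.
zpool status tank
zpool list tank
```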
The thing is, there are very few people who have the experience and mentality required to get this stuff right.
A lot of people think "oh, that's an easy problem!" and then when they get into the weeds they realise that actually, no it isn't.
IMHO, ZFS has the correct mindset, and some of the "limitations" may be annoying, sure, but mostly come down to "don't make dumb decisions when setting up your pool by being cheap" and "reliability for 24/7 operation isn't entirely free".
If you want max performance, just run RAID0.
If you care about long-term storage of your data, then ZFS is IMHO the only real option.
edit:
Also, ZFS originated at Sun, with some very clever Sun engineers; hence, I trust the design is mostly right. I think the proof is in the pudding… ZFS on Linux is less trustworthy IMHO… but still better than ext, as the design principles are right and you're just dealing with implementation bugs.
And when your file is, let's say, a 5 GB H.264 MP4? You just store a hash for every single block individually, so you can report that the entire file is wrong when one block goes bad, because you are the Aryan file system?
Bow down to the ZFS master race, try our punch, don our zealot robes, but leave your mongrel filesystems at the door.
Love that a shitpost got this much engagement and detailed good-faith discussion, but at the end of the day: ZFS is the best at what it does, yet only a small number of people need its features, or anything in its class of filesystem.
No. When the data on disk is wrong, it can be corrected using the parity, the mirrored copy, or just another copy (however you choose to set it up).
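A minimal sketch of what that repair looks like in practice, assuming a hypothetical redundant pool named "tank":

```sh
# A scrub walks every block, verifies its checksum, and rewrites
# any bad copy from the parity or mirror side.
zpool scrub tank

# Shows counts of checksum errors found and repaired, and names any
# files that could not be fixed.
zpool status -v tank

# Reset the error counters once the repair is confirmed.
zpool clear tank
```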
If you have a collection of large media files, you can create a dataset with a larger block size, too, so you don't have as much overhead for metadata. Likewise if you have a collection of small files, you can create another dataset with a smaller block size, so you avoid wasting a bunch of empty space in each block. Very small files can even be fit into the space of a metadata block itself, avoiding the need for allocating a block for storing the file contents at all.
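Roughly like this, with hypothetical pool/dataset names:

```sh
# Large media files: bigger records mean fewer blocks to track.
zfs create -o recordsize=1M tank/media

# Lots of small files: smaller records waste less slack per file.
zfs create -o recordsize=16K tank/maildir

# Confirm what each dataset is using.
zfs get recordsize tank/media tank/maildir
```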
I feel like you're the one singling out ZFS as an inferior race of filesystem. I'm not sitting here making baseless criticism of ext4, or hassling people who use XFS. Each filesystem has its own merits and applications.
ZFS is a storage FS, but consumers are fast smashing into storage-sized FS problems.
Well, so many normies record 4K video on a phone, which means large files by default. It will only get bigger.
So unless your network via an ISP is fast enough to let the cloud work it out, the file systems on our devices need to catch up to huge files that are non-compressible. I know log files on a Linux box are super compressible, though; see the sketch below.
One family holiday could create 1 TB of content, if Dad was camera-happy. Well, somewhat of an exaggeration.
File systems are, now more than ever, needed to hold and work with more data, and keep it safe.
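On the compressibility point: a quick sketch of per-dataset compression, again with made-up names:

```sh
# Text-heavy data like logs compresses very well; lz4 is cheap enough
# to leave on everywhere, and it bails out early on incompressible data.
zfs create -o compression=lz4 tank/logs

# See how much space it is actually saving.
zfs get compressratio tank/logs
```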
ZFS has larger limits for file size, partition size, etc. than pretty much anything else, and has changed-block tracking built in (sketched below).
What do you think will do a better job than ZFS moving forward, and why?
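That changed-block tracking is what powers snapshots and incremental replication; a sketch with hypothetical dataset names:

```sh
# Take a snapshot, let writes happen, take another.
zfs snapshot tank/data@monday
# ... data changes ...
zfs snapshot tank/data@tuesday

# Send only the blocks that changed between the two snapshots,
# with no full rescan of the dataset needed.
zfs send -i tank/data@monday tank/data@tuesday | zfs recv backup/data
```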
ZFS is used by CERN for their hadron collider research, and I'm pretty sure they have bigger datasets than any of us. Probably bigger than all of us here at Level1 put together.
The only real argument I've seen against ZFS is that it is "expensive" in terms of resource requirements (but it offers integrity features that nothing else does). But as time moves on, hardware gets cheaper and those resource overheads become less relevant.
Well, about the "resources" ZFS requires: the most popular devices people have now are phones, and they're pushing 128 GB and 256 GB. It won't be long till they're at 1 TB, and computers are already at 1 TB or more with a HDD.
File systems like ext, FAT, and NTFS were not designed to manage TBs of storage over time. In fact, they don't even detect most errors.
ZFS is designed to handle these sorts of capacities, however, hence my confusion regarding you calling it a "storage FS" and saying that consumers are fast running into "storage-sized FS problems".
I do recall a tweet from Adam (surname escapes me, @ahl on Twitter, one of the original ZFS developers) that ZFS was originally deployed on hardware with similar specs to the original iPhone…
Leventhal, maybe?
Was he the guy who spilled the beans about Apple switching to ZFS before Apple did, causing them to stop the rollout and create the abomination that is APFS (or HFS+?)?