Since I moved my office back home, I've been getting buried in USB drives. Rather than buying more and more portable USB things, I'd rather just set up a server and be done with it. Then I'll set up offsite backups next.
So I'm building a backup server. It won't even be on most of the time: I'll turn it on, run a backup of the systems I have, and then turn it off. Run a scrub periodically and be done with it. The OS for the server will be on a separate drive from the raid array (no root-on-raid).
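The maintenance cycle I picture is roughly this (a rough sketch only; ZFS is used purely for illustration here, and the pool name "tank" is a placeholder):

```sh
#!/bin/sh
# Rough sketch of the occasional maintenance cycle. "tank" is a
# placeholder pool name; ZFS is just for illustration.
zpool scrub tank                                  # kick off a scrub
while zpool status tank | grep -q "scrub in progress"; do
    sleep 60                                      # wait for it to finish
done
zpool status tank                                 # check the result
poweroff                                          # then shut the box down
```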
My initial thought was to slap a copy of Debian on it, mdadm raid 5 or 6, and done. But then I was able to snag some 20TB Exos drives, and I'd expect the recovery time (if ever needed) to take forever…
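For reference, the mdadm route I had in mind would look something like this (just a sketch; device names are placeholders):

```sh
# Sketch of the plain mdadm route; device names are placeholders.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
# Resync/rebuild throughput is capped by these kernel knobs, which is
# where the "rebuilding a 20TB member takes forever" worry comes from:
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
```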
TrueNAS and ZFS just seem like overkill, especially since I won't actually be working off the NAS. But from what I've seen, if you're using ZFS, TrueNAS is the best way to do it (rather than installing ZFS on Debian).
Obviously mdadm raid is always an option.
On the btrfs side (and yes, I could put btrfs on top of mdadm, but why? I don't really see an advantage over ext4 there), it appears that native raid56 should still be avoided? I'm still kind of fuzzy on what they mean in their docs, though. From what I can see, the "unstable" stuff mostly applies to "zoned" raid, whatever that is.
https://btrfs.readthedocs.io/en/latest/ch-volume-management-intro.html
https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
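If I'm reading those docs right, sticking to the profiles that seem to be considered stable would look roughly like this (a sketch; device and mountpoint names are placeholders):

```sh
# Sketch of a btrfs setup on the profiles that seem to be considered
# stable: raid10 for data, raid1 for metadata. Names are placeholders.
mkfs.btrfs -d raid10 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mount /dev/sda /mnt/backup
btrfs scrub start /mnt/backup      # the periodic scrub step
btrfs scrub status /mnt/backup     # check progress/results
```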
You can see my confusion in the fact that I'm having trouble even formulating a coherent, discrete question.
It appears that btrfs is currently best suited for raid 0, 1, or 10, and ZFS for parity-style raid 5/6. Am I reading things correctly?
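In other words, the ZFS side of that comparison would be raidz2 for a raid-6-style setup, something like this (a sketch; pool and device names are placeholders):

```sh
# Sketch of the ZFS parity-raid equivalent (raidz2 ~ raid 6).
# Pool and device names are placeholders; /dev/disk/by-id/ paths
# are usually preferred over sdX names for real pools.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
zpool scrub tank       # same periodic-scrub idea as above
zpool status tank
```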
Depending on what happens in the future, I don't expect to shrink the array; if anything, expand it. And it looks like both btrfs and ZFS will handle that natively when they're running the raid, rather than going through mdadm.
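From what I've gathered (hedged, since I haven't tried any of this), expansion would look roughly like the following; device, pool, and mountpoint names are placeholders, and the single-disk raidz widening needs a recent OpenZFS (2.3+), if I understand correctly:

```sh
# Expansion sketches; device, pool, and mountpoint names are placeholders.

# btrfs: add a device, then rebalance data across it
btrfs device add /dev/sde /mnt/backup
btrfs balance start /mnt/backup

# ZFS: the classic route is adding another whole vdev
zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh
# Newer OpenZFS (2.3+) can reportedly also widen an existing raidz vdev:
zpool attach tank raidz2-0 /dev/sde

# mdadm, for comparison, grows the array and then the filesystem on top
mdadm --add /dev/md0 /dev/sde
mdadm --grow /dev/md0 --raid-devices=5
resize2fs /dev/md0
```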
Any other advice/wisdom? Thanks.