Create BTRFS for upgradable backup storage

I’m building a PC very soon, and I’m wondering about btrfs. Is this really a good option for my files if I want to keep upgrading the same PC for, say, 10 years or so? The way I see it, I can pop in a couple of new hard drives when prices drop, making this infinitely upgradeable, preventing data loss, and staying relatively maintenance-free… Am I overlooking something? I do realize I should also have backups of my backups if I want things to be as safe as possible.

That’s what I’m running. I’ve grown from 4 hard drives up to now 15, using a Q6600, 8 GB of RAM, a 4-port PCI SATA card, and a two-port eSATA card with a port multiplier driving a Mediasonic 8-bay enclosure. 256 GB SSD, 1 TB scratch disk, a 4 TB photos drive, and 15 × 4 TB in raid1.
One drive is being flaky at the moment and now I’m trying to figure out how to replace it.
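For the record, the usual tool for swapping out a failing drive is `btrfs replace`, which rebuilds onto the new disk in place. A sketch, assuming the device paths and mount point are placeholders for your actual setup:

```shell
# /dev/sdX is the flaky drive, /dev/sdY the replacement,
# /mnt/pool the mounted filesystem (all paths are examples)
btrfs replace start /dev/sdX /dev/sdY /mnt/pool

# Watch the rebuild progress
btrfs replace status /mnt/pool

# If the old drive has already dropped off the bus, refer to it
# by its devid (listed by `btrfs filesystem show`) instead:
btrfs filesystem show /mnt/pool
btrfs replace start 3 /dev/sdY /mnt/pool
```

The new drive must be at least as large as the old one; if it’s larger, a follow-up `btrfs filesystem resize <devid>:max /mnt/pool` claims the extra space.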

Isn’t btrfs being abandoned by major distros these days?


The raid 5/6 implementation is not safe for use.
Mirroring, single devices, or raid0 are fine.
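For anyone following along, a minimal sketch of the safe configuration, a raid1 mirror, with example device paths:

```shell
# Mirror both data (-d) and metadata (-m) across two drives
# (device paths and mount point are examples only)
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# Later, grow the pool by adding a drive and rebalancing
# so new and existing chunks stay on the raid1 profile
btrfs device add /dev/sdd /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```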

You could use BeeGFS instead. It’s easy enough to use that even I got it set up.

Yes, but most OSes and recent mobos have BIOS support for raid 0 and 1 these days, so there’s basically nothing to gain in using btrfs if RHEL (and its descendants), SUSE, etc. actively avoid it, which in turn will discourage most devs from getting involved.

A distributed FS might be too different for the OP’s purposes, though.

BTRFS would be mobile and transferable across systems, regardless of whatever raid controller is on the system.
It also has snapshotting and self-healing, which onboard raid doesn’t.
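To illustrate those two features, a quick sketch (subvolume and mount paths are placeholders):

```shell
# Cheap, instant read-only snapshot of a subvolume
btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/.snapshots/data-before-upgrade

# Scrub walks all data and verifies checksums; on raid1 profiles,
# blocks that fail the checksum are repaired from the good copy --
# this is the "self healing" onboard raid can't do
btrfs scrub start /mnt/pool
btrfs scrub status /mnt/pool
```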

Intel’s raid is considered the most stable, but it still falls out of alignment/sync, causing excessive writes to one half of a mirror, or the loss of an entire raid0.

ZFS would be a better choice if that’s the point to consider, I reckon.

ZFS would be a better choice, and allows for more options, like raidz1, raidz2 and raidz3 (roughly raid 5, 6 and 7). It also has built-in transfer tools (send/receive), and integrates into backup tools well too.
But it is a bit more bloated, requiring a lot more RAM.
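A sketch of both points, assuming example pool names (`tank`, `backup`) and device paths:

```shell
# raidz2 pool: survives two drive failures (raid 6 equivalent)
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# The built-in transfer tools: snapshot a dataset, then stream
# it to another pool (or pipe through ssh to another machine)
zfs snapshot tank/data@nightly
zfs send tank/data@nightly | zfs receive backup/data
```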

I’m a fan of ZFS, but have to curb my enthusiasm.

Ehhh… I’m not sure more RAM or hardware is that much of a problem compared to starting a storage server on a filesystem that doesn’t have a good future, maybe even obsolescence. IMHO pouring in a bit more money is better than doing that. A workaround is using LGA2011 boards: used E3/E5 v2 CPUs and DDR3 RDIMMs are easily found, much cheaper and lower-power than any DDR4 setup; these will do fine for years at low cost, and at worst they cost the same as an X470/X570 system.


As far as I can gather, BeeGFS is only useful in clusters of servers. Plus it’s not open source (even if free), so I doubt it’ll gain much market share.

BeeGFS is similar to HDFS; it’s meant for when you have a datacenter’s worth of data to store and use.

There is the source code:


My bad, I thought it was only free to use.


If you are referring to the “write hole” that the btrfs parity raid levels have not fixed, it’s not that big of a deal for home use. It takes two distinct failures before you hit the “hole.” First you have to have something like an unexpected power failure during a write operation, and even then the inconsistent parity information is simply recalculated, as long as there isn’t a subsequent drive failure before it can be recalculated. And even then, only the data subject to the original interrupted write operation is lost. For enterprise use I can see how that is not good enough; but for home use of archive data that isn’t written very often, it’s a very narrow problem.

And it’s not just btrfs that has the problem. Any parity level of “independent” disks has the same problem. The Linux md parity raid levels have the same issues. They solved it by adding a write journal in kernel 4.4. But it’s not enabled by default; you have to manually specify a disk for the write journal in order to close the hole. And even then, you need a version of mdadm recent enough to utilize the write journal. For example, the version of mdadm in the Ubuntu 16.04 repositories is from 2013, which predates kernel 4.4. So every md raid 5 on Ubuntu 16.04 is vulnerable to the write hole.
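For completeness, here’s how closing the hole looks on md, assuming a recent mdadm and example device paths (the journal device should be fast, e.g. an SSD partition):

```shell
# raid5 across three drives with a dedicated write journal;
# writes hit the journal first, so an interrupted stripe write
# can be replayed instead of leaving inconsistent parity
# (requires kernel >= 4.4 and a recent mdadm; paths are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb /dev/sdc /dev/sdd \
      --write-journal /dev/nvme0n1p1
```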

I wouldn’t let that scare you away from btrfs.

Don’t get me wrong, I like ZFS too. It’s a proven performer in the file systems game. I can see upsides and downsides to both. I’m mainly looking for something to start small and grow as my needs grow. I like the idea of just plugging in a bigger hard drive and having it keep working. I also don’t think I’ll need high performance, since I’m after an archive rather than a power-user solution.
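That “plug in a bigger drive” workflow is essentially two btrfs commands. A sketch with placeholder paths and an example devid:

```shell
# Rebuild onto the bigger drive in place of the smaller one
# (/dev/sdX = old small drive, /dev/sdY = new bigger drive)
btrfs replace start /dev/sdX /dev/sdY /mnt/pool

# The filesystem initially only uses the old drive's capacity;
# grow it to fill the new drive (here devid 2 is an example --
# check yours with `btrfs filesystem show`)
btrfs filesystem resize 2:max /mnt/pool
```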