It’s not distributed in a network sense; it’s just distributed across local volumes. The basic idea is that you have three separate ext4 filesystems and you want to spread whole files more or less randomly across them.
If one drive dies, you still have 2/3 of your files; if another one goes, you have 1/3.
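And the parity disk buys you back that one failure: SnapRAID’s single parity is RAID-5-style XOR, so recovery is just arithmetic over the surviving disks. A toy sketch of the idea (not SnapRAID’s actual block/parity-file machinery):

```python
# Toy illustration of single (XOR) parity across three "disks".
# SnapRAID works on fixed-size blocks and stores parity in a file on a
# dedicated disk; this only shows the recovery arithmetic.

def xor_bytes(*chunks: bytes) -> bytes:
    """XOR equal-length byte strings together."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

# One equal-sized block from each of the three data disks.
disk1 = b"file-on-disk-1.."
disk2 = b"file-on-disk-2.."
disk3 = b"file-on-disk-3.."

parity = xor_bytes(disk1, disk2, disk3)   # what "sync" computes

# Disk 2 dies: XOR of the parity with the survivors rebuilds it.
rebuilt = xor_bytes(parity, disk1, disk3)
assert rebuilt == disk2
```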
I’m assuming you make snapshots of the original filesystems (e.g. with LVM) and then run snapraid sync using the mounted snapshots as the data sources, in order to get the parity updated? Do you need to keep the snapshots until the next sync?
It doesn’t use filesystem snapshots; it effectively treats the parity itself as the snapshot. When you run sync it computes parity for the data as it exists at that point, and if a disk later fails the array can be restored to the state of the last sync. The advantage is that it can be used on any existing filesystem without changing the data; the downside is that it won’t work well for data that changes frequently. That makes it handy for media libraries and other large arrays of mostly static data.
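For reference, a minimal snapraid.conf along those lines; all paths and disk names here are made up for illustration:

```
# One dedicated parity disk; the data disks keep their existing files.
parity /mnt/parity1/snapraid.parity

# Content files hold the metadata/checksums; keep copies on several disks.
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# The existing ext4 filesystems, mounted as usual.
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```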
Right, I’m thinking I may want to snapraid a snapshot in case there happens to be a database on the filesystem, so that snapraid doesn’t depend on the files not changing while it’s backing them up (e.g. imagine you had a data=ordered type of use case and wanted to ensure the data is actually consistent). That said, I have no idea how aufs or snapshots behave across multiple volumes with respect to consistency.
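A rough sketch of that snapshot-then-sync idea, assuming LVM volumes and a second snapraid.conf whose data directories point at the snapshot mount points; every device name, mount point, and path below is an assumption for illustration:

```python
# Hypothetical snapshot-then-sync wrapper: freeze each data volume with an
# LVM snapshot, mount the snapshots where the alternate snapraid.conf
# expects the data disks, run the sync, then tear the snapshots down.
import subprocess

VOLUMES = ["disk1", "disk2", "disk3"]   # assumed LVs in volume group "vg0"
CONF = "/etc/snapraid-snap.conf"        # assumed: data dirs -> /mnt/snap/*

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    try:
        for lv in VOLUMES:
            # Copy-on-write snapshot; the size only needs to absorb writes
            # that land on the origin volume during the sync.
            run("lvcreate", "--snapshot", "--size", "5G",
                "--name", f"{lv}_snap", f"vg0/{lv}")
            run("mount", "-o", "ro", f"/dev/vg0/{lv}_snap",
                f"/mnt/snap/{lv}")
        # Parity now reflects a single consistent point in time.
        run("snapraid", "-c", CONF, "sync")
    finally:
        # Best-effort cleanup, even if an earlier step failed.
        for lv in VOLUMES:
            subprocess.run(["umount", f"/mnt/snap/{lv}"])
            subprocess.run(["lvremove", "-y", f"/dev/vg0/{lv}_snap"])

if __name__ == "__main__":
    main()
```

On this model the snapshots would only need to live for the duration of the sync: per the explanation above, the parity is the point-in-time record, so once it’s written the snapshots have done their job.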