I have a single server hosting my data on ZFS and a handful of client machines (mostly Linux, but also Android and possibly a Windows box soon). I want to back up some of the data to the cloud with rclone. This would include selected datasets from ZFS and some folders (like `/etc`, `/root` and home directories) from the clients. It's just redundancy for critical data; the entire ZFS pool is already backed up to my own offsite backup box.
What's the best way to stage data from the client machines while preserving permissions, keeping the backup incremental, and maintaining as much isolation as possible?
My first idea is to push the data from each client to a dedicated backup dataset on the server, then mount all the necessary datasets over NFS, read-only, on a backup VM which would just rclone them periodically to the cloud. But I'm not sure how exactly to implement it. The backup VM would probably need to run rclone as root with `no_root_squash` on the NFS shares so it can access everything on the ZFS pool. Should I then create a dedicated user account on the server for each client machine and give it an exclusive directory on the backup dataset? I'd like each machine to only be able to access and update its own directory. Each machine would then periodically rsync its data as root to the server, using its respective dedicated account.
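To make that concrete, this is roughly what I'm picturing; the dataset names, usernames and addresses below are made up:

```
# On the server: one child dataset and one dedicated user per client
zfs create tank/backups
zfs create tank/backups/laptop1
useradd --home-dir /tank/backups/laptop1 backup-laptop1
chown backup-laptop1: /tank/backups/laptop1
chmod 700 /tank/backups/laptop1

# /etc/exports: read-only export to the backup VM, no_root_squash so its root can read everything
# /tank/backups  10.0.0.50(ro,no_root_squash,no_subtree_check)

# On each client (cron or a systemd timer): rsync runs as root locally but logs in
# to the server as that client's dedicated account, so ownership won't fully survive
# on the receiving end -- which is part of what I'm unsure about
rsync -aHAX --delete --numeric-ids /etc /root /home \
    backup-laptop1@server:/tank/backups/laptop1/
```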
Or maybe I should create a backup user and/or group on the clients, the server and the backup VM, do some ACL magic on the specific data I want to back up, and have the server pull the data? Managing ACLs, matching permissions etc. in this case sounds like a nightmare.
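For the pull variant I imagine something along these lines, and this is exactly the part that looks painful (again, names and paths are made up):

```
# On each client: a dedicated backup user that can read, but not modify, the selected data
useradd --system backup
setfacl -R -m u:backup:rX /etc /home        # grant read access on existing files
setfacl -R -d -m u:backup:rX /etc /home     # default ACLs so new files stay readable

# On the server, running as root so ownership can be preserved on the receiving side:
rsync -aHAX --delete --numeric-ids \
    backup@laptop1:/etc backup@laptop1:/home /tank/backups/laptop1/
```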
Lastly, I could pick some specialized client-server backup software and use that. But that seems like overkill for such a simple task, and I don't want to deal with custom formats, databases or folder structures on the server. I'd much prefer to have just a directory per client, with subdirectories for `etc`, `home` and anything else I want to preserve from each client.
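Ideally the staging dataset on the server (and the copy in the cloud) would end up looking roughly like this:

```
tank/backups/
├── laptop1/
│   ├── etc/
│   ├── root/
│   └── home/
├── desktop1/
│   ├── etc/
│   └── home/
└── ...
```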