Nothing against all the fabulous setups that have been shared here and over at /r/homelab and /r/datahoarder, but it might be a good time to reflect on the data that you hoard and whether you actually care about it.
I wrote down my thoughts on this and how I try to deal with this digital disease, and I hope that it helps others evaluate their setups as well.
I’ve run btrfs for a while in various configurations, but I always felt like configuring automatic snapshots was a hassle. snapper isn’t the best tool out there (it doesn’t handle some failure scenarios all that well), but it does the job.
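For anyone who hasn’t touched it yet, the setup is less work than it sounds. Roughly what it looks like on my box (the config name and the retention numbers here are just examples, tune them to taste):

```sh
# Create a snapper config for the root subvolume (name/path are examples)
sudo snapper -c root create-config /

# Set timeline snapshotting and retention in /etc/snapper/configs/root
sudo snapper -c root set-config \
    TIMELINE_CREATE=yes \
    TIMELINE_LIMIT_HOURLY=24 \
    TIMELINE_LIMIT_DAILY=7 \
    TIMELINE_LIMIT_WEEKLY=4 \
    TIMELINE_LIMIT_MONTHLY=6

# The bundled systemd timers handle creation and cleanup from here
sudo systemctl enable --now snapper-timeline.timer snapper-cleanup.timer
```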
Do you back up your snapshots remotely? I built a btrfs send | zstd | rclone rcat workflow (it’s a bit more involved than that, but that’s the gist) for one of my systems, triggered by systemd after snapper finishes, and I’m wondering what other folks do.
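Stripped down, the idea is something like this (the snapshot paths and the remote name are placeholders; the real script figures out the parent snapshot on its own and does error handling):

```sh
#!/usr/bin/env bash
set -euo pipefail

# Placeholders -- adjust to your snapper layout and rclone remote
SNAP=/.snapshots/123/snapshot
PARENT=/.snapshots/122/snapshot
REMOTE=b2:my-backups/snapshots

# Stream an incremental send of the read-only snapshot straight to the
# remote. Note: rclone uploads from stdin with `rcat`, not `cat`.
btrfs send -p "$PARENT" "$SNAP" | zstd -T0 | rclone rcat "$REMOTE/123.btrfs.zst"
```

Restoring is just the reverse: `rclone cat` the object and pipe it through `zstd -d` into `btrfs receive`.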
Personally I don’t; I’ve relied on restic to get the job done so that I can have an encrypted backup of all the files on my server. It is a bit resource-intensive though: the initial scan and index building take quite a lot of time when you have a couple of terabytes of data.
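For reference, the whole thing boils down to a few commands (the repo location and paths here are examples, not my actual setup):

```sh
# One-time repository setup (prompts for the encryption password)
restic -r sftp:backup-host:/srv/restic-repo init

# Back up the data directories; everything is encrypted client-side
restic -r sftp:backup-host:/srv/restic-repo backup /srv/data /etc

# Drop old snapshots according to a retention policy
restic -r sftp:backup-host:/srv/restic-repo forget \
    --keep-daily 7 --keep-weekly 4 --prune
```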
Interesting point though, I’ll have to think about it and see if I want to do something similar with my setup at some point!
Wrote this one mainly to inspire people who might not have the resources for fancier server setups, but who have access to a cheap used laptop they can use as a starting point for their homelab/self-hosting adventure.