Backup Strategies

I’d enjoy learning nuggets of wisdom around your backup strategy of choice, why you chose it, what you like and don’t like about it, etc.

My current setup is:

  • A small NAS made from a retired workstation that I got for free from work and threw TrueNAS SCALE on.
  • The usual big tech cloud service solutions for anything I really care about and can’t lose.

Things I don’t like about my setup are:

  • A NAS in my house by itself isn’t much of a backup in the case of fire, flooding, theft, etc.
  • I’d like to move away from ‘Big Tech’ as much as possible.

Ideas I’m chewing on:

  • I’m lucky enough to have a couple of nerdy family members and friends in the area. I might be able to be the off-prem solution for them and vice versa (a rough sketch of how that push could work is after this list).
  • Using Linode or something similar to put my own thing in the cloud.
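
For the “be each other’s off-prem” idea above, the plumbing can be as simple as a scheduled rsync push over SSH to a dataset the other person carves out for you. A minimal sketch, written as Python glue only so it can grow logging/retries later; the host name, paths, and key file below are all placeholders, and it assumes key-only SSH access on their end:

```python
#!/usr/bin/env python3
"""Sketch of a peer off-site push: rsync over SSH to a friend's NAS.
Host, paths, and key file are made-up placeholders."""
import subprocess

SRC = "/mnt/tank/important/"  # local data worth protecting (trailing slash = copy contents)
DEST = "backup@friend-nas.example.com:/mnt/tank/offsite-for-me/"  # their NAS, hypothetical
SSH_CMD = "ssh -i /root/.ssh/offsite_ed25519"  # key-only login so cron can run unattended

# -a preserves ownership/permissions/times, -z compresses over the WAN,
# --delete keeps the remote copy an exact mirror (drop it for an additive copy).
subprocess.run(
    ["rsync", "-az", "--delete", "-e", SSH_CMD, SRC, DEST],
    check=True,
)
```

Run it from cron or a systemd timer on whatever schedule the WAN link can tolerate; the same script pointed the other way is what you’d host for them.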

My limitations are:

  • I’m a married dad with kids, a mortgage, and an SUV. As much as I would absolutely love to spend thousands of dollars on kit to outfit a sick homelab, I can’t. I do have access to a lot of retired gear from work, and I like the look of some of the budget-conscious/power-efficient USFF solutions that L1 has made videos about.
  • Whatever I put together needs to be user-friendly. I like tinkering with things, nerding out on stuff, and learning from it. The rest of the family, who so graciously put up with me and my shenanigans, need things to “just work” in a straightforward fashion.

Generally, the off-site backup needs to be an hour or more away from the area you’re in. Figure the destructive path of a hurricane, tornado, flood, earthquake, or wildfire: you want that off-site backup not to be involved in the same disaster that took out your on-site backup!

That’s an excellent point.

Case in point on the latter: LTT has their main server(s) in the Vancouver, BC area, and their backup server is in Kamloops, BC.

Ok but what happens when there’s a big earthquake and BC breaks away from Canada and floats into the ocean toward Siberia?

This is the planned setup for a very janky NAS (ext4, not ZFS, due to resource constraints):

Multiple drives sit in front of a single, large spinning disk. Every x hours, an rsync/rsnapshot copy of the front drives is written to the large backing store, tarred (to preserve folder structure and save space), and then parchive generates parity data in case of corruption. The tarred files go to offsite backup A; the parity info goes to offsite backup B.
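
Roughly, the tar + parity + split step could be glued together like the sketch below. The paths, the daily.0 snapshot name, and the 10% redundancy figure are placeholders; it assumes par2cmdline is installed and leaves the actual uploads to offsite A/B as prints:

```python
#!/usr/bin/env python3
"""Sketch of the tar + parchive step, assuming rsnapshot has already written
the snapshot. Paths and the offsite A/B destinations are placeholders."""
import glob
import subprocess
from datetime import date

SNAPSHOT = "/mnt/backing/daily.0"                  # rsnapshot output (hypothetical path)
ARCHIVE = f"/mnt/backing/snap-{date.today()}.tar"  # one tarball preserves folder structure

# 1. Tar the snapshot (uncompressed here; add -z/-J if the CPU can take it).
subprocess.run(["tar", "-cf", ARCHIVE, "-C", SNAPSHOT, "."], check=True)

# 2. Generate ~10% parity so silent corruption in the tarball can be repaired later.
subprocess.run(["par2", "create", "-r10", ARCHIVE], check=True)

# 3. Split the outputs: tarball to offsite A, parity volumes to offsite B.
tar_files = [ARCHIVE]
par_files = glob.glob(ARCHIVE + "*.par2")
print("send to offsite A:", tar_files)  # e.g. an rclone/rsync target
print("send to offsite B:", par_files)  # e.g. a different rclone/rsync target
```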

In my case, the data is relatively boring and minimal, so I will use Google Drive for the offsites. I would only back up data I can’t live with losing. ISOs and the like are not worth backing up. Only a few select media items are so rare that they can’t be replaced (I’m not a photos guy).

This is a slight modification (parchive was added) of a system I ran on a very early model of NUC a decade ago (it used an external drive to snapshot a single internal 1TB disk and backed up any important data to the cloud).

I am also looking at some other Python tools for home-rolling parity: pyFileFixity · PyPI
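
For the par2 route in the plan above (as opposed to home-rolling parity with pyFileFixity), the restore-side check/repair would be roughly this sketch; the filename is a made-up example:

```python
#!/usr/bin/env python3
"""Sketch of the restore-side integrity check, assuming the tarball and its
.par2 volumes have been pulled back from both offsites into one directory."""
import subprocess

PAR2_INDEX = "/restore/snap-2024-01-01.tar.par2"  # made-up example filename

# 'par2 verify' exits non-zero when damage is detected; 'par2 repair' then
# uses the recovery volumes to rebuild the damaged tarball in place.
if subprocess.run(["par2", "verify", PAR2_INDEX]).returncode != 0:
    subprocess.run(["par2", "repair", PAR2_INDEX], check=True)
```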
