Favorite Backup Solutions?

I’ve been backing up my home directory and creating a package list weekly to an external drive using some bash scripts.
This seems to work okay and I’m confident I could restore in the event one of my drives craps out.
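
Roughly, the weekly script boils down to something like this (the paths and the Debian-style package listing are just placeholders for whatever your setup uses):

    #!/bin/bash
    # Weekly backup: mirror the home directory and save a package list.
    set -euo pipefail

    DEST=/media/backup/weekly          # mount point of the external drive
    mkdir -p "$DEST"

    # Copy the home directory (archive mode preserves permissions, times, links).
    rsync -a "$HOME"/ "$DEST"/home/

    # Save a list of installed packages (Debian/Ubuntu style).
    dpkg --get-selections > "$DEST"/package-list.txt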

I’m just wondering what solutions you folks are using on your Linux systems. Do you back up using just custom scripts, do you use some sort of tool, etc.?

1 Like

I use Borg. It supports backing up to remote systems, encryption, and deduplication. Incremental backups on my PC take less than a minute.

The only problem is that it doesn’t run periodically by itself, so you need a script which does that for you. Cron works of course, but only if the PC is turned on at the time of the backup.
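
For anyone curious, the day-to-day workflow is only a few commands with borg 1.x (the repository path is just an example; the quickstart covers the details):

    # One-time: create an encrypted repository.
    borg init --encryption=repokey /mnt/backup/borg-repo

    # Each run: create a deduplicated, dated archive of /home.
    borg create --stats /mnt/backup/borg-repo::'home-{now}' /home

    # Keep a sensible number of old archives.
    borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /mnt/backup/borg-repo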

2 Likes

Rsnapshot

Custom script which uses rsync to back up to an external drive at $work. And snapshots.

1 Like

That looks neat. Any suggested guides or resources?

Here’s their quickstart guide:

https://borgbackup.readthedocs.io/en/stable/quickstart.html

2 Likes

For people less comfortable with the command line, deja-dup.

If you’re comfortable with the command line and cron jobs or systemd-timers, restic.
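
If you go the restic route, the basics are only a few commands (the repository path here is just an example):

    # One-time: initialise an encrypted repository.
    restic init --repo /mnt/backup/restic-repo

    # Each run: back up the home directory (encrypted, deduplicated).
    restic --repo /mnt/backup/restic-repo backup /home

    # Drop old snapshots according to a retention policy.
    restic --repo /mnt/backup/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune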

Yeah, the correct answer at the moment is Borg for Linux systems.

1 Like

I found Deja-Dup (the default backup utility for Ubuntu) to be as slow as molasses, so I went back to doing backups the way I was doing them on Red Hat in the late 1990s.

I have rsync -a --delete /home/myHomeDirectory /media/me/myBackupDrive/ as a cron job that runs daily on both of my personal machines. The backup drive for my daily driver is a 1TB SATA SSD. About 600GB of data is being backed up. Backups take less than 60s (unless I’m spring-cleaning or reorganising).
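
For the curious, the crontab entry is nothing fancy; something along these lines (the time of day is arbitrary):

    # crontab -e: run the mirror backup at 03:00 every day.
    0 3 * * * rsync -a --delete /home/myHomeDirectory /media/me/myBackupDrive/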

I prefer having plain (non-compressed, non-encrypted) backups because restoration (in part or in whole) is a trivial file-system operation, and complete backup/restore failure is virtually impossible. I’ve seen far too many “Checksum error, restore failed” messages over the decades to bother with compression/encryption any more.

Once I feel the delta is large enough, I push a backup (rsync) to a USB3-connected external SSD. I thus always have a ‘not too stale’ copy of my data conveniently nearby and not connected to power.

On an infrequent basis (once or twice a year) I rotate the external SSD for my daily driver off-site.

I don’t think my personal data is important enough to warrant a more elaborate system.

Gulp. That’s a terrible idea.

I would recommend looking at timeshift or borg. Timeshift is functionally rsync at its core if you’re not using btrfs.

1 Like

Thanks folks! Some great advice here. I wasn’t aware so many options were so highly regarded. I look forward to trying a few of these out!

Nothing whatsoever wrong with --delete. I want a backup that’s basically a mirror — not some ever-expanding mess of obsolete files.

There’s no such thing as “the correct answer”, by the way.

If you want a mirror, that’s fine, but I wouldn’t call it a backup.

Mirrors were the first — and still are the most common and entirely legitimate — form of backup. You can’t just redefine words to suit your biases.

I’m not entirely sure why you are resorting to dubious means (spreading FUD) to try and de-legitimise backup methods that you don’t personally approve of. How about keeping the thread constructive, just letting people post their solutions, and allowing the OP the untainted freedom to investigate and choose what is most appropriate for their needs?

“Backing up data” has never been, is not, and will never be, a problem with only a single solution.

PS: Thanks for suggesting timeshift/borg. Overkill and unnecessary for my needs, but the OP may find some of their features to be of value.

It’s probably not the best option, but Timeshift on Mint, set up with proper BTRFS volumes, is pretty decent for rolling back through different system states. I can tell you that rsync with Timeshift is not that good, though, especially when it comes to trying to reverse package updates, etc.

I was looking at the rsync docs and they have an option for --delete-after. Have you tried this? Is there a particular reason you delete during the transfer of data?

--delete-after: receiver deletes after transfer, not during

As for the argument, I appreciate everyone’s input. What works for one person may not be the best for another, and hell maybe this post will help someone else out later on too!

Right back at ya buddy.


Mirrors were not, and are not, a backup.

A requirement of a backup is that if you delete the file, it doesn’t disappear from the backup.


In your situation, if the original is lost or damaged, the backup then gets lost or damaged.
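
If you like the rsync workflow but want deleted files to survive, one option is dated, hard-linked snapshots (essentially what rsnapshot automates); a rough sketch, with made-up paths:

    #!/bin/bash
    set -euo pipefail

    SRC=/home/me/
    DEST=/media/me/backups
    TODAY=$(date +%F)
    mkdir -p "$DEST"

    # Unchanged files are hard-linked against the previous snapshot, so each
    # dated directory looks like a full copy but only changed files use space.
    rsync -a --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"/

    # Point 'latest' at the snapshot we just made.
    ln -sfn "$DEST/$TODAY" "$DEST/latest"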

Creating a package list is a very good idea. Backup apps really ought to include it as an option.

It’s faster for me to reinstall a distro than restore everything from backup. If I’ve remembered to make that package list, I can create a new list of the packages on the fresh install, do a diff to get those I added, and reinstall them. (This means the repo files need to be restored, too.)
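
In practice that’s a couple of one-liners on an rpm-based system; roughly (file names are arbitrary):

    # On the old install (or restored from backup): save the package list.
    rpm -qa --qf '%{NAME}\n' | sort > old-packages.txt

    # On the fresh install: save its package list.
    rpm -qa --qf '%{NAME}\n' | sort > new-packages.txt

    # Packages present in the old list but not the new one.
    comm -23 old-packages.txt new-packages.txt > to-install.txt

    sudo dnf install $(cat to-install.txt)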

I spend most (not all) of my time on Fedora and use a homebrew post-install script to run the initial update, add repos, install goodies and tweaks.

Once a day rsync runs to copy /home and /etc to an internal drive. Conventional wisdom says I should use an external drive in case the machine fries itself, but I can’t be bothered.

I can’t recall the last time I did an automated restore. I do occasionally manually restore files after errant brain and fingers delete them.

ANYTHING I decide I can’t live without I also manually upload to Google Drive.

Making and retaining hard copies of important documents is not a bad idea, especially if you have access to a safety deposit box or something similar.

1 Like

If you --delete-after, then your backup can fail with a “write failed — no space left on device” type of error under certain circumstances.

If I (say) have a 1TB backup drive, which has 800GB of backups on it, then there’s only 200GB of free space on it prior to rsync running. If I’ve made in excess of 200GB of changes on the source drive, then the backup drive will fill up, run out of space, and rsync will fail to even reach the --delete-after phase.

If you just --delete(-during) then you delete files (free up space) as you go — making it far less likely that you’ll run out of space on the backup drive.

The --delete-before option deletes files and frees up space before the copy — making it virtually impossible to fill up your backup drive (assuming source and backup are the same size).

So, those three options (--delete-after, --delete(-during), and --delete-before) primarily map to resource constraints. If you have very little headroom on the backup drive you’d use --delete-before; if you have a reasonable amount of headroom you’d use --delete(-during); and if you have a large amount of headroom you’d use --delete-after.
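
In command form the three variants differ only in that one flag (paths are just placeholders):

    # Very little headroom: free space on the destination before copying.
    rsync -a --delete-before /home/me/ /media/me/backup/

    # Reasonable headroom: delete during the transfer (what plain --delete does in modern rsync).
    rsync -a --delete /home/me/ /media/me/backup/

    # Plenty of headroom: delete only after the transfer has completed.
    rsync -a --delete-after /home/me/ /media/me/backup/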

What constitutes ‘very little’, ‘reasonable’ and ‘large’ amounts of headroom is up to the user to decide. Earlier this year, when I built my system, I took the opportunity to bring back a pile of archives for review. As a result, my first backup was ~925GB and I only had ~6% free space on the backup drive. Since I knew for sure that I’d be moving directories of 60GB or more (over that 6% of free space) around on the source drive, rsync --delete-after would have been guaranteed to fail. That’s why I went with --delete(-during).

I normally don’t even bother considering --delete-after until my backups are below 50% of drive capacity and most of my spring cleaning/reorganising is done (i.e. source contents are well structured and unlikely to change in a significant way).

Since my backups take less than 60s (on average) the ever-so-slight performance advantage that --delete-after enjoys over --delete(-during) is not a compelling-enough reason for me to use it at this point. I’d rather encounter fewer (preferably no) “no space left on device” errors, and have fewer (preferably no) backups fail, rather than have backups complete a few seconds faster.

Finally, a --delete-after backup that fails can orphan obsolete data in rarely-modified directories and eat away the free space on your backup drive. If you don’t religiously monitor your backup logs that can easily snowball into something that’s messy (time consuming) to clean up. I have better things to do.

1 Like

Borgmatic is one solution. It doesn’t run automatically by itself either, but it does make borg easier to configure.

Use anacron or a persistent systemd timer unit. Example:
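
Here’s a rough sketch of a persistent timer pair (unit names, schedule, and the script path are just placeholders):

    # /etc/systemd/system/backup.service
    [Unit]
    Description=Run the nightly backup script

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    # /etc/systemd/system/backup.timer
    [Unit]
    Description=Nightly backup timer

    [Timer]
    OnCalendar=daily
    # Persistent=true runs a missed backup at the next boot if the PC was off.
    Persistent=true

    [Install]
    WantedBy=timers.target

    # Enable with:
    #   sudo systemctl daemon-reload
    #   sudo systemctl enable --now backup.timer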