Linux server backup solution

A little while ago I repurposed an old Dell server we had as dedicated storage running ownCloud/Nextcloud. Originally there was no backup solution planned for this machine, since everything on it was just stuff that was too big to email. Now we’re considering using it more heavily, as our DS is getting long in the tooth.

I would like to back up the whole OS, as there’s some configuration that went into getting it working with our domain. I could probably reconfigure it, or even back up just the configs to use on a reinstall, but it seems to me like it would be easier to back up the whole thing. It’s all one contiguous partition on Btrfs. dd seems like the best route to me, but I believe the drive has to be unmounted to use it, which means I would have to take the system down and do it manually. Any other suggestions or ideas?

Backups

Elkarbackup ( https://github.com/elkarbackup/elkarbackup )
Backup solution based on RSnapshot with a simple web interface.

ZBackup ( http://zbackup.org/ )
A versatile deduplicating backup tool.

Burp ( http://burp.grke.org/ )
Network backup and restore program.

BorgBackup ( https://borgbackup.readthedocs.org/en/stable/# )
BorgBackup is a deduplicating backup program.

Obnam ( http://obnam.org/ )
Network backup and restore, with snapshotting, deduplication and encryption.

Backupninja ( https://labs.riseup.net/code/projects/backupninja )
Lightweight, extensible meta-backup system.

Bareos ( https://www.bareos.org/ )
A fork of the Bacula backup tool.

UrBackup ( http://www.urbackup.org/ )
Another client-server backup system.

Lsyncd ( https://github.com/axkibe/lsyncd )
Watches local directory trees for changes, then spawns a process to synchronize them. Uses rsync by default.

Amanda ( http://www.amanda.org/ )
Client-server model backup tool.

Yadis! Backup ( http://www.codessentials.com/ )
Yadis! Backup is a real-time backup application.

Rsnapshot ( http://www.rsnapshot.org/ )
Filesystem snapshotting utility.

Backuppc ( http://backuppc.sourceforge.net/ )
Client-server model backup tool with a file pooling scheme.

Restic ( https://restic.github.io/ )
Restic is a program that does backups right.

Bacula ( http://www.bacula.org )
Another client-server model backup tool.

Duplicity ( http://duplicity.nongnu.org/ )
Encrypted bandwidth-efficient backup using the rsync algorithm.

kvmBackup ( https://github.com/bioinformatics-ptp/kvmBackup )
Software for snapshotting KVM images and backing them up.

Snebu ( http://www.snebu.com )
Snebu is an efficient incremental snapshot-style client/server disk-based backup system for Unix/Linux systems.

Relax and Recover ( http://relax-and-recover.org/ )
Bare metal backup software.

SafeKeep ( http://safekeep.sourceforge.net/ )
Centralized pull-based backup using rdiff-backup.

Attic ( https://attic-backup.org )
Attic is a deduplicating backup program written in Python.

Box Backup ( https://www.boxbackup.org/ )
Another backup system.

Duplicati ( http://www.duplicati.com )
Duplicati is a backup client that securely stores encrypted, incremental, compressed backups on cloud storage services and remote file servers.

Cobian Backup ( http://www.cobiansoft.com/cobianbackup.htm )
Cobian Backup is an easy-to-use backup application.

Deja Dup ( https://launchpad.net/deja-dup )
Deja Dup is a simple backup tool with a GUI.

Duply ( http://duply.net/ )
Duply is a frontend for Duplicity, a Python-based shell application that makes encrypted incremental backups to remote storage.

rdiff-backup ( http://www.nongnu.org/rdiff-backup/ )
Incremental file backup software.

Mondo Rescue ( http://www.mondorescue.org/ )
Mondo Rescue is a GPL disaster recovery solution.

Bup ( https://bup.github.io/ )
Very efficient backup system based on the git packfile format, providing fast incremental saves and global deduplication.

Ugarit ( https://www.kitten-technologies.co.uk/project/ugarit/doc/trunk/README.wiki )
Ugarit is a backup/archival system based around content-addressable storage.

Are you going to recommend one? Surely you haven’t used all of these.

Generally speaking, backing up your homedir(s), /etc/, and a list of installed packages will pretty much take care of what you need.

If you want to back up entire volume images, use Clonezilla.

https://clonezilla.org/

If you want to take incremental backups of specific files/dirs, use Duplicati.
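For the homedir/etc/package-list route, something like this is the gist of it (hypothetical paths, and it assumes a dpkg-based distro; swap in rpm -qa or similar otherwise):

#!/bin/sh
# Example only: gather what you'd need to rebuild on a fresh install.
DEST=/mnt/backup/$(hostname)-$(date +%F)    # illustrative destination
mkdir -p "$DEST"

# Configs and user data
tar -czpf "$DEST/etc.tar.gz" /etc
tar -czpf "$DEST/home.tar.gz" /home

# Package list; restorable later with:
#   dpkg --set-selections < packages.list && apt-get dselect-upgrade
dpkg --get-selections > "$DEST/packages.list"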

The problem with Clonezilla is that it still wants the volume unmounted, which I’m trying to avoid if at all possible. It looks like Btrfs supports cloning while mounted, so it should be possible. I don’t necessarily need a block-level copy, but I do want something that will ‘just work’ on restore, so there would be minimal configuration on my part and therefore minimal downtime.
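Something along these lines looks like it might work, going by the Btrfs docs (untested on my end, and the paths are made up):

# Take an atomic, read-only snapshot of the running root (read-only is required for send)
btrfs subvolume snapshot -r / /root-snap

# Stream the snapshot to a file on the backup target
# (or pipe into "btrfs receive" on another Btrfs volume)
btrfs send /root-snap | gzip > /mnt/backup/root-snap.btrfs.gz

# Drop the snapshot afterwards
btrfs subvolume delete /root-snap

Restore would mean btrfs receive onto a freshly made filesystem and then reinstalling the bootloader, so it’s not quite a dd image, but the system stays up the whole time.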

I wouldn’t necessarily recommend it, but you should be able to clone an LVM snapshot.
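Roughly like this, if the disk were on LVM (names and sizes are made up, and it wouldn’t apply to a bare Btrfs partition):

# Temporary snapshot of the logical volume while the system keeps running
lvcreate --snapshot --size 10G --name root_snap /dev/vg0/root

# Image the frozen snapshot instead of the live volume
dd if=/dev/vg0/root_snap of=/mnt/backup/root.img bs=64K conv=noerror,sync

# Remove the snapshot once the image is written
lvremove -y /dev/vg0/root_snap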

No, just dumping all open source solutions with brief descriptions.

Link intended for @Adubs, not @Ruffalo

dd if=/dev/sdX of=/dev/sdY bs=64K conv=noerror,sync
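(sdX/sdY being placeholders for the source and destination disks; bs=64K just uses a larger block size to speed the copy up, conv=noerror keeps going past read errors instead of aborting, and sync pads any short reads with zeros so offsets stay aligned.)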

You can also use Partimage to mirror your drive, and/or Timeshift (rsync-based).

I gotta say, MS might be shit, but at least they got backups figured out. I don’t understand why Linux doesn’t have a similar system. Windows has had VSS for a while now.

I don’t see how Duplicati is going to work unless I back everything up, which doesn’t sound like a good way to do things, or at the very least sounds like a good way to break things if the restore is to a different system. There’s no way I could get away with just backing up a few folders either; there are files required to make this system do what it does scattered all over, and any one of them being missing would render the system useless upon restore, forcing me to set the system up from scratch again and restore just the data. Any tool that would produce a backup that’s 100% working on restore is low-level and requires that the volume be unmounted.

It’s kind of silly really. There has to be a better way.

Like I said, you could create images of an LVM snapshot. But really, whole-system restores aren’t the Linux way; you should just back up your home dirs, /etc, and a list of installed packages.

I enjoy using tar to back up.
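For a whole-system tar, something like this is the usual shape of it (paths are just examples, and open files can come out inconsistent unless you run it from a snapshot or live media):

# Archive the root filesystem, staying on this filesystem and skipping
# pseudo-filesystems and the backup destination itself
tar --one-file-system -czpf /mnt/backup/root-$(date +%F).tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/run --exclude=/mnt /

Restoring is: boot live media, recreate and mount the filesystem, untar, reinstall the bootloader.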

That won’t fly in an enterprise setting. There’s no way.

@moderators just go ahead and lock this thread please.

I never suggested it for the enterprise; there, everything is virtualized and backed up with something like Veeam.

In enterprise… Any backup solution that works is acceptable.

EDIT: Just document it… for DR purposes.

Actually, yeah. You’d be surprised how unsophisticated some enormous, world-class operations are. I recently interviewed an operations lead for one of Amazon’s major US AWS regions who told me, to my disbelief, that they have no automated incident handling. Events come in and are dispatched directly to engineers who manually fix every. Single. Thing.

… and test the restores periodically.

We have an Oracle DB at work that just grew from 24 TB (90% in use) to 36 TB. I manually scripted the backup. After a few hardware upgrades and a platform migration (PPC to x86), the backup time went from 30-36 hrs to under 8 hrs.

It’s not a tier 1 app, so having real-time data replication off-site would cost way too much.

I felt accomplished.

That is included in my choice of the word “works”.

Backups aren’t DR in isolation. If those backups are synced to a separate datacenter, they could be considered DR with a high RTO and likely a high RPO as well, but I don’t know any enterprise client that would be satisfied with those targets.

Typically DR sites target a <4 hour RTO and a <15 minute RPO (ideally a 0 second RPO), and the only way to do that is continuous replication, synchronous if the data link and application support it.

@nx2l: Good god, PPC? How ancient was that DB? Anyway, setting up an Oracle Data Guard asynchronous standby is pretty easy and would be my suggestion there, if you do need offsite DR. I wouldn’t use replication on Oracle for DR, as it has a very specific definition. Or rather several: materialized views, multi-master, GoldenGate, etc.

Also, I assume you’re using RMAN for backups, not Data Pump/export. That saves a ton of time too.
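The no-frills version of that is roughly the following (just a sketch; the compression and retention cleanup are assumptions about your setup):

rman target / <<'EOF'
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
EOF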

An RPO of 0 sec and an RTO of < 1 hr is targeted when an application is Tier 1…

For something like a Tier 3… just making sure it can be recovered is key.

It was running on an IBM Power 750.
No, the DB is on ASM, so we have to back up the raw LUNs and replicate them offsite (RMAN/Data Pump exports take longer than the current method).

EDIT: sorry about the off topic tangent