Easy-to-follow disk backup and restore solution

This guide is intended to show an alternative to Clonezilla. Clonezilla is fine and all, but I prefer rolling my own solution using dd, because I like to put my data in jeopardy. /s

This solution is more intended to show people how to send data over ssh in situations where you can’t easily attach a disk to your box. It works well as an offline backup solution and with local storage too.

Things you'll need:
  • Something to back up. I will assume it will be a Linux installation, but anything on a disk should work.
  • A Linux live boot environment that has dd (pretty much all of them do) and a compressor: gzip is available mostly everywhere and is the fastest; bzip2 compresses better but is slower; xz is the slowest but saves the most space. I will show bzip2, but the commands are similar, just change the command and file extension.
  • A backup location with at least as much space as the data used on the disk you are backing up. It doesn’t have to be as large as the disk itself, just as large as the total amount of data you are backing up. Note that you don’t have to use direct-attached storage (DAS, i.e. a USB drive, SATA drive etc.); I will show you how to do it via SSH too (NAS). See the sketch after this list for a quick way to check the sizes.
  • For restoration purposes: if you are going to restore to a bigger disk than the one you originally backed up and you are not familiar with resize2fs or xfs_growfs, you can use an Ubuntu desktop live environment or any other live USB that comes with gparted, so you can easily expand your storage through a GUI.
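
A quick sanity check before you start: compare the source disk’s size (the worst case for the image) against the free space on the backup target. A minimal sketch, assuming /dev/nvme0n1 is the source and the target is mounted at /mnt/backup-location (both are the placeholder names used throughout this guide):

## size of the source disk in bytes ##
blockdev --getsize64 /dev/nvme0n1
## free space on the backup target ##
df -h /mnt/backup-location
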
Limitations

Just like Clonezilla, you cannot easily clone a big disk and restore it on a smaller one. You can however use gparted for that: shrink the partitions enough to fit on the small disk, clone the disk with dd, write the image to the smaller disk, then resize the partitions again using gparted. (Unless you have the target restoration disk attached to the box you are running gparted on, in which case it might be easier to skip dd and just use gparted.)

Another limitation is that this method is very prone to user error.

As always, make sure you know what you are backing up and where to, otherwise you risk losing data instead of backing it up.

Note that this guide uses offline backup, meaning that the disk you are backing up needs to be unmounted, just attached to the system. I will assume that /dev/nvme0n1 is the disk you want to back up (the NVMe block device is nvme0n1; plain nvme0 is the controller device) and /dev/sdb is the disk where you want nvme0n1 to be backed up to. You can find either using lsblk or fdisk -l.
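
For example, a quick overview of every disk, which also lets you confirm nothing on the source disk is mounted (on older lsblk versions the last column is called MOUNTPOINT instead):

lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINTS
## the mountpoint column for the disk you are backing up should be empty ##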


First, boot into your live environment, then, depending on your preference, either attach your storage medium (if it wasn’t already attached) or set up networking if it isn’t automatic. Then proceed to your backup or restore method. I will assume you are running as root; if not, just run sudo -i and you should become root.

Backing up to direct-attached storage:

mkdir -p /mnt/backup-location
mount /dev/sdb1 /mnt/backup-location
dd status=progress bs=4M if=/dev/nvme0n1 | bzip2 -9 -c > /mnt/backup-location/disk-image.img.bz2
## or name the above disk-image however you prefer ##
ls -lh /mnt/backup-location/disk-image.img.bz2
## this is just to make sure the backed-up file is there, but you will see errors on the screen if it fails at any point ##
umount /mnt/backup-location
## then you can safely reboot ##
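
Optionally, before rebooting, you can test the archive’s integrity. bzip2 -t decompresses to nowhere and only reports errors; note that it reads the whole archive, so it takes a while:

bzip2 -t /mnt/backup-location/disk-image.img.bz2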

Backing up through SSH to a NAS:

dd status=progress bs=4M if=/dev/nvme0n1 | ssh user@server "bzip2 -9 -c > /path/to/backup-location/disk-image.img.bz2"
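
The command above ships the raw, uncompressed disk over the network and compresses it on the NAS. If the network is your bottleneck, a variant that compresses locally before sending should help (same assumed paths as above):

dd status=progress bs=4M if=/dev/nvme0n1 | bzip2 -9 -c | ssh user@server "cat > /path/to/backup-location/disk-image.img.bz2"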

Restoring an image from a DAS (note: this will delete everything on the target, i.e. the disk you restore to):

mkdir -p /mnt/backup-location
mount /dev/sdb1 /mnt/backup-location
bunzip2 -k -c /mnt/backup-location/disk-image.img.bz2 | dd bs=4M status=progress of=/dev/nvme0n1

Restoring an image from a NAS (same note applies):

ssh user@server "bunzip2 -k -c /path/to/backup-location/disk-image.img.bz2" | dd bs=4M status=progress of=/dev/nvme0n1
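
After either restore finishes, it’s worth flushing caches and having the kernel re-read the partition table before you try to mount anything. partprobe comes with parted; blockdev --rereadpt from util-linux does the same job:

sync
partprobe /dev/nvme0n1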

This should be it for simple backup and restore to the same disk.


Resizing

If you have restored an image to a target disk bigger than the original disk, you will want to resize the filesystem. The easiest way is to use gparted, which is a GUI program, in a live Linux that has it preinstalled (like Ubuntu desktop). It’s intuitive enough that I don’t need to get into it. For everyone else, here’s how to do it for ext4, which most people are using.

Note: there’s a real chance you will lose data if you do something wrong. Thankfully, you should still have a backup copy (if you used the -k parameter when decompressing / recovering).

I will assume partition 1 is the /boot partition, which we don’t really care that much about, and partition 2 is your root. Again, I highly suggest using gparted if you don’t know what you are doing. The limitation of this approach is that if any partition sits after the one you want to grow, you have to move it out of the way first, which again is easier to do with gparted.

Assuming no partition follows the one you want to expand, for dos style partition tables:

## remember to always press m for help while in fdisk ##
fdisk /dev/nvme0n1
## to print partitions ##
p
## again, look at Disklabel type: in this menu and make sure it is dos! ##
## delete the root or home partition ##
d
2
## recreate the partition ##
n
## primary ##
p
## partition number, should be the same as what it was before, assuming it was 2 ##
2
## start at block whatever ##
<enter>
## ends at the maximum block, basically takes all the rest of the disk - you can enter +60G if you want a custom size of 60 GiB and leave free space ##
<enter>
## do not delete the partition signature ##
N
## print again to see the partition table ##
p
## to write changes and quit fdisk ##
w


## verify the filesystem ##
e2fsck -f /dev/nvme0n1p2
## and finally resize the filesystem to fill the partition ##
resize2fs /dev/nvme0n1p2

## then verify it's ok by running lsblk and mounting the partition and looking through the data ##

For xfs, the procedure is the same, except skip the e2fsck step, as that command only works on ext2/3/4 filesystems. Note that the command is xfs_growfs (not xfs_grow) and it grows a mounted filesystem, so mount the partition first and point the command at the mount point:

mount /dev/nvme0n1p2 /mnt
xfs_growfs /mnt

For gpt style partition tables, the procedure is similar, just that you don’t have “primary and extended” partitions anymore.

In the case that you have a swap partition following, say, the root partition, delete both, recreate the root with the extra space, then put the swap back at the end of the disk. It will go something like this (assuming root is partition 2 and swap is partition 3):
fdisk /dev/nvme0n1
p
d
3
d
2
## recreate the root partition first ##
n
p
2
## start where the old root started ##
<enter>
## and here be careful: leave room at the end of the disk for the new swap ##
-8G
## minus just means it stops 8 GiB short of the end, so the last 8 GiB of the disk stay free ##
## if asked, say no to removing the partition signature ##

## now recreate the swap in the space that is left ##
n
p
3
## default first and last sector, taking the free 8 GiB at the end ##
<enter>
<enter>

## partition type ##
t
## select partition 3, i.e. your new swap ##
3
## on a gpt disklabel fdisk lists "Linux swap" as type 19; on a dos disklabel it is 82 - press L to make sure ##
19

Then print the partition table with p to double-check and write the changes with w. The root partition now covers everything from where the old root started, including the old swap’s location, up to where the new swap begins; grow its filesystem with e2fsck -f and resize2fs (or xfs_growfs) exactly as above. Since the swap has moved, you also need to re-initialise it and update its UUID in the restored system’s /etc/fstab.
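
A minimal sketch of that last step, with the partition numbers assumed above (mkswap prints the new UUID):

mkswap /dev/nvme0n1p3
## copy the printed UUID into the swap entry of /etc/fstab on the restored root partition ##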

That's it, but if you want to learn what each command does, keep reading.

mkdir -p = make a directory, creating any missing parent directories along the way, and don’t complain if it already exists

-

mount = mount a drive to a location
umount = unmount a drive from a (or all) location (the command is not misspelled)

-

dd = command to write to or copy from a disk. It’s a block-level copy, so it reads the whole device, used space or not
bs= = parameter for how many bytes to read / write at a time. A larger block size usually improves r/w speed (rarely it can make things slower); 4M is a sane option, the default is 512 bytes, which is slow
if= = parameter to set the input device or file
of= = parameter to set the target / output device or file

-

bzip2 = program to compress a file (or a stream) using bzip2 compression. The same goes for gzip and xz with their own formats.
-k = keep input files when compressing / decompressing. When compressing in these examples it changes nothing, because the input is a disk, but you want this option when decompressing, so your archive remains / doesn’t get deleted when you decompress it. Otherwise you’ll have to repeat the backup process.
-c = parameter to write output to stdout (which is what lets us pipe or redirect it) instead of creating a file next to the input.
-9 = the maximum compression level. Levels go from -1 (fastest, least compression) to -9 (slowest, most compression). For bzip2 the level mostly just sets the block size and barely affects speed, so there is little reason to use anything other than -9. If using gzip and you want to sacrifice some space for speed, you may want to choose -3 or -5. xz is mostly used when you compress a file once, save a lot of space, and decompress it many times; it’s best for sharing files over the internet with a lot of people.
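
If you want a rough idea of the compression ratio before committing to a full backup, one trick is to compress a sample of the disk and check the output size. A sketch, with the usual caveat that the first gigabyte is not necessarily representative of the whole disk:

dd if=/dev/nvme0n1 bs=4M count=256 | bzip2 -9 -c | wc -c
## compresses the first 1 GiB and prints the compressed size in bytes ##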

-

| = a shell feature to “pipe” the output of a program as the input of another program.


Yes. A person after my own heart :hear_no_evil:


I like the wiki tools but it’s really off-putting when I just had the mod console access a few months ago

Anyways scanning this just gave me an idea to try to make a shell script with menus to do this shit for you. I think I’ll make that my devember project.

E6 stuff has to wait right now.


I’ve got no idea about scripting and stuff, but this simple command will write the whole NVMe block device to a single file on the target drive, right?
And giving status will give visible feedback while it is working (nice)

But, maybe a quick check before starting, that there will be enough space for the file?
Because even with a progress output, once the write is started the target file is basically hosed, and a failure means no old backup and the new backup being rubbish?

This might be obvious to most of us, but maybe not?

Mostly, I would look at incorporating a logfile / mapfile, as this is a whole-drive write: once triggered, it can only go as fast as the slowest link in the chain, and without a logfile it’s not resumable, you have to start from square one?

Just my $0.02


Using xfsdump & xfsrestore would handle this for XFS file systems I believe.
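
For reference, a minimal sketch of that approach, assuming the XFS filesystem is mounted at /mnt/source and the paths are placeholders (xfsdump works file-level on a mounted filesystem rather than block-level):

xfsdump -l 0 -f /path/to/backup-location/root.xfsdump /mnt/source
## -l 0 means a full (level 0) dump; restore it into an empty, mounted XFS filesystem ##
xfsrestore -f /path/to/backup-location/root.xfsdump /mnt/target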

True. Funnily enough, I recall that a long time ago Clonezilla didn’t check the space on the target device before performing a copy. You would basically copy, and copy, and copy, until you reached 100% space usage and got an error message about it.

But yeah, if one intends to script this, checks need to be made before proceeding. Pro-scripting tip: always ask the user for variable inputs (like the name of the disks or partitions) before you start working on things, so that the work of a script won’t stop mid-way to ask for a silly variable that could have been asked at the beginning. This goes for any interactive screens that take a long time to complete.
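
A rough sketch of what those up-front checks could look like, with everything asked at the start (paths and device names are placeholders, and this is nowhere near a complete script):

#!/bin/sh
## ask for everything up front, before any long-running work ##
printf "Disk to back up (e.g. /dev/nvme0n1): "; read -r SRC
printf "Target file (e.g. /mnt/backup-location/disk.img.bz2): "; read -r DST
## source size in bytes, and free bytes on the target filesystem ##
SRC_BYTES=$(blockdev --getsize64 "$SRC")
FREE_BYTES=$(( $(df --output=avail -B1 "$(dirname "$DST")" | tail -n 1) ))
## worst case the image barely compresses, so require the full disk size free ##
if [ "$FREE_BYTES" -lt "$SRC_BYTES" ]; then
    echo "Not enough space on the target, aborting." >&2
    exit 1
fi
dd status=progress bs=4M if="$SRC" | bzip2 -9 -c > "$DST"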
