Backup of OS Drive

So my OS drive is quadruple-booted and I was wondering what the best way is to make a backup of it. I was thinking of using dd in Linux, but I wasn't too sure if that would work. Not only that, but I'm worried that any form of restoring a backup to my OS drive, which is a solid state drive, would utterly crush/destroy it due to the amount of data written to it (consumer SSDs being fragile and what not).

Basically, what would be the best method of tackling this?

1 Like

IMO, if you have a larger spinning drive, use dd to back up the whole drive to an image file on the spinner. Keep a few image file backups in rotation so that you have some increments, and so that if it dies in the middle of a backup, you aren't screwed.

To do this properly though, I believe you'll want to dd from a live USB or something, since imaging the drive while it's in use is problematic (or maybe won't work?).
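A minimal sketch of that workflow — the device and mount-point names (`/dev/sdX`, `/mnt/spinner`) are placeholders, and the runnable part uses an ordinary file as a stand-in for the drive:

```shell
# Whole-drive backup to an image on the spinner (run from a live USB),
# then restore -- /dev/sdX and /mnt/spinner are hypothetical names:
#   dd if=/dev/sdX of=/mnt/spinner/os-backup-01.img bs=4M status=progress
#   dd if=/mnt/spinner/os-backup-01.img of=/dev/sdX bs=4M status=progress
# Same idea, demonstrated on a file standing in for the drive:
dd if=/dev/urandom of=/tmp/fake-drive bs=1M count=4 2>/dev/null
dd if=/tmp/fake-drive of=/tmp/fake-drive.img bs=1M 2>/dev/null
cmp /tmp/fake-drive /tmp/fake-drive.img && echo "backup matches source"
```

`bs=4M` just speeds the copy up; `status=progress` (GNU dd) shows how far along it is.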

3 Likes

Right, but I’m worried about restoring a backup of that magnitude as well. I figure something of that size would crush an SSD.

multiple options here:

https://wiki.archlinux.org/index.php/disk_cloning

writing an image the size of the device to the device is just one full cycle, it wouldn’t hurt it at all unless it’s already very near end of life

if you're on an ext fs, e2image will be faster than dd or ddrescue

3 Likes

No, that should be fine. You just don’t want to restore an SSD image to an HD or vice versa because of the way the blocks are dealt with.

The SSD is 480 GB, would that work with an ext4 filesystem or a Mac OS Extended (Journaled) filesystem?

e2image will work with ext4; you're gonna wanna clone the raw block device if you're using HFS+

there’s like 20 other options listed in that wiki page if you want something with a gui

I don’t use Arch

dd commands work well on Mac and Linux
I feel I'm comfortable enough to use 'em

I know I can't store the backup on FAT32 because of its 4 GB file size limit (NTFS itself would actually handle a file that big)

it doesn’t matter if you use arch, it’s literally just a list of universally available backup tools.

1 Like

dd copies the drive in a very literal way, so it doesn't matter what the filesystem(s)/partition(s)/flag(s) are. Restoring it will give you an exact copy, and if there's extra space on the destination device, it will simply be left untouched.

You can also use dd to just copy a partition as well, but for a full disk copy, all you really need to worry about is capacity.
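A toy illustration of both points, with files standing in for the devices (all paths hypothetical): restoring a 2 MiB "image" onto a 4 MiB "drive" overwrites only the first 2 MiB and leaves the rest alone.

```shell
# 4 MiB zeroed "destination drive" and a 2 MiB "image":
dd if=/dev/zero of=/tmp/big-drive bs=1M count=4 2>/dev/null
dd if=/dev/urandom of=/tmp/small.img bs=1M count=2 2>/dev/null
# conv=notrunc mimics writing to a block device: nothing past the image is touched
dd if=/tmp/small.img of=/tmp/big-drive conv=notrunc 2>/dev/null
# First 2 MiB now match the image byte for byte:
cmp -n $((2*1024*1024)) /tmp/small.img /tmp/big-drive && echo "restored region matches"
```

The same logic is why capacity is the only real constraint: the destination just has to be at least as big as the image.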

I have a spare SSD 250 GB, and a 500 GB HDD that I’ll use for testing. Last time I did something this huge with files, everything went to complete shit.

Like mentioned before, use dd and copy the entire block device, not just each partition, and you should be fine. If you had issues last time, it could have been due to how you copied and how you restored.

Also, unless you are using a really old SSD from the mid 2000s, you are not going to destroy it. It may get pretty slow during the restore and/or pretty hot if it is NVMe, but otherwise you are good. Relatively modern SSDs have write endurance close to, or surpassing, that of a mechanical HDD over its working life.

There's a lot of duplication on this forum; we just had a thread about this like a week ago. Anyway, use Clonezilla.

https://clonezilla.org/

Regarding SSDs, they have a limited lifespan under massive write volumes. It depends on your SSD, controller, and write amplification, but Samsung rates its TLC SSDs at 1,064 write cycles, where one cycle means every NAND block is written once. So one full write cycle would reduce the expected lifespan by about 0.09%.
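The arithmetic behind that 0.09% figure, assuming the 1,064-cycle rating above:

```shell
# 100% of rated endurance divided across 1064 full-drive write cycles:
awk 'BEGIN { printf "%.2f%% of rated endurance per full write\n", 100/1064 }'
# prints: 0.09% of rated endurance per full write
```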

2 Likes

I also rep Clonezilla, but some people get scared of the Simplified Chinese writing on the splash screen.
Clonezilla is something that always goes with me in my digital mercenary toolbox.

Currently trying

dd if=/dev/rdisk2 of=/Volumes/BackupStorage/GenericFileName

and

dd if=/Volumes/BackupStorage/GenericFileName of=/dev/rdisk2

I feel like I should have added the file extension “.img” to “GenericFileName”. Not sure if it will yield the same results or not.
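For reference, a sketch of the macOS side of this (the disk number and volume name are hypothetical — check yours with `diskutil list`); the runnable bit just shows that the extension is cosmetic and won't change what dd writes:

```shell
# Unmount (not eject) before touching the raw device:
#   diskutil unmountDisk /dev/disk2
#   sudo dd if=/dev/rdisk2 of=/Volumes/BackupStorage/GenericFileName bs=1m
# /dev/rdiskN is the raw device; it's typically much faster for dd than /dev/diskN.
# The ".img" extension is purely cosmetic -- file(1) looks at content, not names:
printf 'some data' > /tmp/GenericFileName
file /tmp/GenericFileName
```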

1 Like

File names mean nothing to *nix systems. You can add the file extension to the file name after it is done.

Note you need to unmount before cloning.

1 Like

Well aware

i would also recommend piping dd through something like gzip to help keep file sizes low.

dd if=/dev/sdX | gzip -c | dd of=/tmp/backup.img.gz
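One thing to keep in mind with that approach: restoring has to go back through gunzip first. A sketch of both directions (device and mount-point names hypothetical), with a runnable round trip on a stand-in file:

```shell
# Hypothetical device form:
#   dd if=/dev/sdX bs=4M | gzip -c > /mnt/spinner/backup.img.gz
#   gunzip -c /mnt/spinner/backup.img.gz | dd of=/dev/sdX bs=4M
# Round trip on a file standing in for the device:
dd if=/dev/urandom of=/tmp/drive bs=1M count=2 2>/dev/null
dd if=/tmp/drive bs=1M 2>/dev/null | gzip -c > /tmp/drive.img.gz
gunzip -c /tmp/drive.img.gz > /tmp/drive.restored
cmp /tmp/drive /tmp/drive.restored && echo "round trip ok"
```

Note the savings depend on the data: free space full of zeros compresses to almost nothing, while already-compressed or random data (as in the demo) won't shrink at all.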

1 Like

wtf it worked