Let's talk backups! (share strategies, knowledge and resources)

I'm well aware of the advantages of ZFS and BTRFS (I watched the two "is RAID obsolete?" videos, which really widened my perspective).

But I always thought that my storage needs and budget are way too small to really benefit from those advantages:

  • The server runs on an old Dell with only 3 SATA ports onboard and obviously no support for ECC memory

  • I only need one drive's worth of storage, and multiple low-capacity drives would be more expensive and would seriously hurt power efficiency. And I'd need a good PCIe-to-SATA expansion card (not a RAID card)

And I really don't need much redundancy.

The backups aren't critical, and I can make snapshots of the OS to an external hard drive along with any other data.

I use BTRFS on all my Linux devices that run a DE: my work laptop, my desktop, and my travel laptop.

ECC isn't all that necessary. Read this. People overreact when talking about ZFS killing data with RAM errors.

Just so you know, ZFS is more of a server/enterprise solution, while BTRFS feels more like a home-use, more robust CoW variant of ext4.

That's where BTRFS and ZFS come in. They support native snapshots, so you don't need to rely on something like LVM to handle it for you and bloat the snapshot size.
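For instance, a snapshot is a single command on either filesystem; the subvolume/dataset names below are just placeholders (and for BTRFS the target directory has to live on the same filesystem):

# BTRFS: read-only snapshot of a subvolume
btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-01-01

# ZFS: snapshot of a dataset, then list existing snapshots
zfs snapshot tank/home@2024-01-01
zfs list -t snapshot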

Y'know, for someone who's been messing around with computers for nearly 20 years, hanging out on this forum makes me realize I know next to NOTHING compared to you people. And I feel like half the stuff you say I have to sit and think about before I understand it.

I barely understand networking, and I'm a complete and utter noob when it comes to Linux, which has been my daily driver OS for some time now. I haven't figured out a good backup strategy for it yet, because most of the systems in my house are still predominantly Windows-based. But here goes.

My backup solution is probably all wrong, and my server is probably set up all wrong too.

I have an older AMD Athlon II dual-core PC with 8GB of RAM that I built years ago serving as my server. On it I'm running Windows Server 2012 R2. It is "taking care" of Active Directory on my network, because I need security features that limit my kids to certain websites and to certain folders on my NAS that aren't for adult consumption. It also gives me a chance to limit what they can change on their computers and what they can and cannot download. They're still like 8 and 9, so I'm sheltering them as much as I can from the horrors that the Internet and the world can be until they're ready.

On that server I physically have a 5-bay HDD enclosure from StarDock installed. In it I have most of my HDDs. I have two other HDDs installed in the normal drive bays in the case.

Software-wise, on that server I'm also running a VM of Windows Server 2012 R2 Essentials. This is where I store all of the documents from various places in the house, and all of our pictures that need to be copied from the various machines through the house: my PC when I'm booted into Windows, my wife's Win10 laptop, my older son's Windows PC, and our living room HTPC backup. (Setting up the HTPC was a bitch, so I want a bare-metal restore on that bitch if I need it.)

The VM is also storing all of my movie media. Some of that media is older movies that are no longer on DVD and cannot be replaced. So I have a program on the VM that duplicates those onto a separate external hard drive whenever I add a new file, or will copy the duplicates back over if I have to get a new drive. I can't remember the name of it right now. It's been years since I had to mess with it. It just works.

That VM server is also part of the Server Pool with the Host OS.

That VM is stored on its own separate hard drive, and the storage drives are just passed through to the VM through VirtualBox. Same for the VM's OS drive.

I also use my Essentials server as my cloud storage server, because it can set up its own website for remote access to files and even to computers on the network, as long as it's a Windows-based machine.

I hope I made this understandable for everyone. If you have questions or want to provide advice, please do tell.

I have a bunch of virtual machines and a couple of servers running Linux; they all run a script which stores a copy of /etc, /home, and whatever other important configuration files I need as a tar file on each machine. Then on my NAS I run a script which copies that config from each machine to the storage on the NAS. I also use btrfs or ZFS for the virtual machine storage and use snapshots, so I can roll back each machine to a previous version if necessary. I do hourly, daily and weekly snapshots of the VMs. In addition to the config files, I also copy the weekly snapshot to the NAS so I can restore the image quickly if needed.

The backup script on the NAS also copies other important files, usually also on the NAS, so the NAS backup is just a 1:1 copy of the current state of everything. After making the NAS copy, it then copies everything from the NAS backup to a separate backup which is on a 2TB ZFS mirror. There I use snapshots and deduplication to store daily snapshots for a month and monthly snapshots for a year. This way I can access previous backups without needing a ton of space.

I then use CrashPlan to back up everything (except the VM images, as they're too big and are updated every week) to cloud storage. I also use CrashPlan to back up a couple of Windows systems to the ZFS backup pool.

Here are the scripts if anyone is interested.

This is one of the scripts which creates the configuration backup on the linux servers:
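In essence it boils down to this (simplified version; the paths below are placeholders, not my real ones):

#!/bin/bash
# Tar up /etc, /home and any other important config into a dated archive
# that the NAS script later picks up from each machine.
STAMP=$(date +%Y-%m-%d)
OUT=/var/backups/config-$(hostname)-${STAMP}.tar.gz

tar -czf "$OUT" /etc /home

# keep only the last week of archives on the machine itself
find /var/backups -name "config-$(hostname)-*.tar.gz" -mtime +7 -delete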

This is the main backup script, at the end it runs my snapraid script which I use for redundancy on my main pool:
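Stripped down, the flow is roughly this (host names, pool names and the snapraid helper path are placeholders; the real script has more error handling):

#!/bin/bash
# Pull the config archives from each machine, mirror the NAS backup onto
# the 2TB ZFS mirror, snapshot it, then run the snapraid job for the main pool.
HOSTS="vm1 vm2 server1"

for h in $HOSTS; do
    rsync -a "$h:/var/backups/" "/mnt/storage/backup/$h/"
done

# 1:1 copy of the NAS backup onto the ZFS mirror
rsync -a --delete /mnt/storage/backup/ /backuppool/backup/

# dated snapshot on the mirror (dedup is enabled on the pool itself)
zfs snapshot backuppool/backup@$(date +%Y-%m-%d)

# redundancy for the main pool (separate script that wraps snapraid sync/scrub)
/usr/local/bin/snapraid-job.sh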


Keeping you up to date!

github repo

Works for me; tested it with an Arch client and server.

This resonates with me more than you know. Networking is a pain in the ass no matter who you are. At least that's what our network guy says.

That's a good thing. AD is extremely good at making this sort of thing work well.

What I took from this is that you've got a pretty good backup solution, but your limiting factor is no offsite copy. What happens to your pictures if the server fails? I'd recommend finding a place to back them up. I recommended CrashPlan in a post above, and I'd absolutely recommend it again because of its ease of use. It's not bare-metal restore, but for files, it's perfect.

I forgot to mention I back up everything that is documents and pictures to Microsoft OneDrive, 'cause it was free and came with my used laptop when I bought it and paid for it to be fixed. I got a free one-month trial of the 5-gig plan like a year ago, and now I use the free space. Any and all old photos past a few years ago are backed up on a DVD that is in a bank safe deposit box.

I'm old school.

All my data is on my NAS and I back it all up manually.

3x 5TB Seagates for my movies etc. If the NAS and the backup fail, I still have the original Blu-rays, so I can re-rip them as needed. I usually only copy the new stuff to the HDD, and I'm not using any tools either: I manually compare what's on the NAS and what's on the backup disk before I copy stuff over.

For my regular data (about 300GB) I have six 2.5" 1TB drives. Every month I make a full copy for local storage and a full copy for off-site storage (at a friend's place). The other four drives contain old backups, the oldest of which gets wiped when I want to make a fresh backup.

Most people who do manual backups tend to be really strict at first and then stop doing so.
I'm no exception, I'm afraid. Just noticed that it's been almost 4 months.


I'm the same way, which is why I've moved to automating it all and only dealing with it when I have a problem.

Especially after my stroke. My memory is SHIT now compared to before, and it was bad before.

Thank whoever invented Post-It notes!

Hey guys, anyone got any experience backing up Android phones?
Preferably wirelessly over the network to a NAS.

My backup solution is very unwieldy. I need to refine it.

Basically I have a script that uses rsync to back up directories on my fileserver to various external hard drives I plug into my desktop. It has come down to me doing it only when I do something major to the server, which is once every few months. Not ideal.

My next step is to set it up so it automatically runs a backup script when a specific hard drive is plugged directly into the server. I need to figure out a way for it to tell me the backup is done; something to do with the PC speaker on the motherboard, beeping until the hard drive is unplugged. I have a lot of info on how to start the correct script based off of each external hard drive's UUID.
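Something along these lines is what I'm picturing; the UUID, mount point and source path are placeholders, and the beep needs the pcspkr module plus the 'beep' utility:

#!/bin/bash
# If the known backup drive is present, mount it by UUID, rsync the
# fileserver data over, then beep until the drive is physically unplugged.
UUID="0000-EXAMPLE"
DEV="/dev/disk/by-uuid/$UUID"
MNT="/mnt/backup"

if [ -e "$DEV" ]; then
    mount "$DEV" "$MNT"
    rsync -aH --delete /srv/fileserver/ "$MNT/fileserver/"
    umount "$MNT"
    while [ -e "$DEV" ]; do
        beep
        sleep 5
    done
fi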

My long-term goal is to set up a small form factor 'NAS' (ITX-based) that turns on once a week to back up the main fileserver. I'll do the external hard drive script on the backup NAS maybe once a month.

Crontab runs a daily script to create separate tar files of /home, /etc, /boot and /var, as well as a text file of the installed packages; all of the file names include the creation date. These files are then transferred to the NAS, and the script also deletes files older than 8 days. Each week the NAS is backed up to a removable drive that I store at work as my offsite backup. I have two of these drives that swap locations weekly, which gives me a total of three weeks of daily backups.
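The gist of that daily script is something like this (the NAS mount point is a placeholder, and the package-list command is the Debian-style one; substitute your distro's equivalent):

#!/bin/bash
# Dated tar backups of the important directories plus a package list,
# written to the NAS, with anything older than 8 days pruned.
STAMP=$(date +%Y-%m-%d)
DEST=/mnt/nas/backups/$(hostname)

mkdir -p "$DEST"
for dir in home etc boot var; do
    tar -czf "$DEST/${dir}-${STAMP}.tar.gz" "/$dir"
done
dpkg --get-selections > "$DEST/packages-${STAMP}.txt"
find "$DEST" -type f -mtime +8 -delete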

Photos are the only files I also store in the cloud.

I use Duplicity in a daily cron job that backs up specific folders as GPG-encrypted tars to my Mega account.

Well, FFS is up and running. I learned that homegroups are worthless, so I had the headache of advanced permissions, etc. The issue now is the slow speeds: I'm getting between 5 and 50 MB/s. Any thoughts on where the bottleneck might be, or is it just pissed that I'm pushing 3TB over the network?

My first line of backup is manual monthly file copies to external drives in a hot swap dock.

My second line of backups is scheduled to run every night automatically.

http://www.2brightsparks.com/download-syncbackfree.html

I have a 3TB internal backup drive. On the first night SyncBackFree backs up C:/Windows, on the next night I back up D:/Stuff (main storage), and on the third night my G:/Games and S:/Steam get backed up.

On top is the run scheduler, below that is the pop-up browser log after every backup, and on the right is the 'what files to back up' filter.

Sorry for my late response, I nearly overlooked it.

I don't know exactly; I'll just assume this is over a normal gigabit network.

5-50 MB/s sounds pretty reasonable. You have to remember that you only get top performance when copying large files; computers/hard drives can only write a certain number of files per second, so that's one bottleneck.

I also sometimes experience lower speeds when using FFS. I normally just do the initial copy manually and then use FFS to update/sync, where the small performance hit is a small price to pay for the comparison and automation functionality.

On my network I never get more than 80 MB/s, and with gigabit Ethernet you are limited to a theoretical maximum of 125 MB/s (1000 Mbit/s divided by 8 bits per byte).

So my advice would be to do the initial copy manually and then not worry about it, because when are you going to move more than a few gigabytes at a time?
Here is a link to a very interesting Tom's Hardware article regarding this topic.

I like this strategy (Positron's post). It's very like my own regular backups to offline disks: simple, and it has been reliable for me.

I now use Linux as my main OS and use Grsync, and sometimes, for fun, I go command-line rsync.

I spin up my backup disks every once in a while and do a non-destructive badblocks scan to keep the disks from sitting stagnant for a long time.
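For reference, that scan is basically one of these (replace sdX with the actual backup disk; the read-write variant must not be run while the filesystem is mounted):

# read-only surface scan (badblocks' default mode), with progress and verbose output
badblocks -sv /dev/sdX

# non-destructive read-write test: reads each block, writes test patterns, then restores the original data
badblocks -nsv /dev/sdX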

I also re-copy stuff back and forth to spare disks to hopefully stop the worst of the bit rot.

I do need to get offsite backup, as I haven't done that yet (I know, I should do it).

I don't have a lot to keep that's irreplaceable, so this works well for me.

I don't have anything special, honestly. I just run duplicity on certain directories, and then point its backups at my Google Drive and an HDD.

I have two copies of my ZFS pool: two servers with the same data.
The backup process is manual, though. Either going with some clustered filesystem or automating ZFS send and receive is on my list of things to try.
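The automation I have in mind is roughly this (pool, dataset and host names are placeholders):

# first full replication from the primary to the second server
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backuphost zfs receive -F backup/data

# later runs: take a new snapshot and send only the increment
zfs snapshot tank/data@today
zfs send -i tank/data@base tank/data@today | ssh backuphost zfs receive backup/data
# (next time, @today becomes the base for the following increment)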

I don't back up my main desktop. It mainly just has projects that are already on GitHub. Everything else can be blown away.

I don't back up Linux server VMs. Instead, I have scripts to build the images unattended. I usually delete and rebuild these VMs just to make configuration changes.

I have a Windows gaming VM that I also don't back up, other than pointing the Documents location to OneDrive. I have no script to rebuild this, but I do almost no configuration to it, so rebuilding would be mostly painless.

So I am attempting to create a backup script using duplicity. Well, I am taking several I have found on the internet and adapting them to my needs.

#!/bin/bash

# Log file for this run (one file per run, named after the start time)
LOG_DIR=/run/media/user/seagate_4tb_01/logs
stamp=$(date +%Y-%m-%d_%H:%M:%S)
LOG=${LOG_DIR}/pictures${stamp}.log

# Log helper: timestamps each message and appends it to the run's log file
trace () {
    echo "$(date +%Y-%m-%d_%H:%M:%S): $*" >> "${LOG}"
}

# Export your GnuPG passphrase to an ENV variable so you don't have to type it every time
export PASSPHRASE=password
# Identifier for your GnuPG key
GPG_KEY=thekey
# Backups older than this will be removed
OLDER_THAN="6M"
# The source of your backup, often a local directory
SOURCE=/home/user/Pictures
# The destination (relative to the home directory of the user you're logging in as)
DEST=file:///run/media/user/seagate_4tb_01/backup/pictures

# Run a full backup on the first of the month, incremental otherwise
MODE=
if [ "$(date +%d)" -eq 1 ]; then
    MODE=full
fi

trace Backup for local filesystem started...

#trace ...removing old backups
# Uncomment this line (and the one above to keep the log informative) to enable backup removal
#duplicity remove-older-than ${OLDER_THAN} ${DEST} >> "${LOG}" 2>&1

#trace ...backing up filesystem
# Backup run (full or incremental depending on MODE)
duplicity \
    ${MODE} \
    --encrypt-key=${GPG_KEY} \
    --sign-key=${GPG_KEY} \
    ${SOURCE} ${DEST} >> "${LOG}" 2>&1

# Reset the ENV variable so the passphrase doesn't linger in the environment
export PASSPHRASE=

I have been testing it and found that when I add a file to the source directory and then remove it, the file does not get removed from the backup. I have done this a few times, and now the compressed backup files on the external hard drive take up twice as much space as the directory being backed up.

How do I fix this?
