I’m trying to back up my NAS ZFS pool. I used to use rsync for this, and it worked well; however, due to recent changes I now have live VM raw files on there, which means they get copied in full each time. Not ideal.
So I’ve moved over to trying zfs send…To be honest, I am not a huge fan. Am I able to send just the latest snapshot? From trying so far, it looks like I would have to keep two snapshots, send the increment, and then delete the old one? That seems liable to break if something doesn’t work correctly, unless I add lots of error-catching code to my bash script.
I don’t have the space to keep snapshots indefinitely and would like to just set it and forget it. Is there any easier way?
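For what it’s worth, the rotation described above can be done safely as long as the old snapshot is only destroyed after the receive succeeds. A minimal sketch, where the dataset names (`tank/vms`, `backup/vms`) and the host `backuphost` are made up, and which does nothing on a machine without that dataset:

```shell
#!/bin/sh
# Incremental send with a two-snapshot rotation, as described above.
# All names here (tank/vms, backup/vms, backuphost) are placeholders.
backup_vms() {
  set -e  # abort on any failure so the old snapshot is never deleted early
  prev=$(zfs list -t snapshot -o name -s creation -H tank/vms | tail -1)
  new="tank/vms@backup-$(date +%Y%m%d-%H%M)"
  zfs snapshot "$new"
  # Ship only the delta between the previous and the new snapshot.
  zfs send -i "$prev" "$new" | ssh backuphost zfs receive backup/vms
  # Only after a successful receive do we drop the older local snapshot.
  zfs destroy "$prev"
}

# Run only where the source dataset actually exists; otherwise do nothing.
if command -v zfs >/dev/null 2>&1 && zfs list tank/vms >/dev/null 2>&1; then
  backup_vms
else
  echo "tank/vms not present; dry run only"
fi
```

The ordering (snapshot, send, then destroy) plus aborting on the first error is the whole trick; tools like Syncoid wrap exactly this logic with proper error handling, which is why people reach for them instead of hand-rolled bash.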
What else do you want to send? You either have a snapshot at the target (the older one) and just update it with further incremental snapshots, or you have to send the entire pool, since there is zero data present on the target.
Snapshots don’t use much space. The first one is obviously all the data; everything after that is just the delta. I have something like 100 full snapshots on my backup server and everything is fine. There’s no problem deleting old snapshots if space is a concern, though ripping huge amounts of referenced blocks out of the pool takes some time.
I don’t understand. It’s damn easy and insanely fast.
edit: OK, with incremental snapshots you need some kind of script so snapshot names get an increment. Check out Syncoid; it’s a script that does exactly that for you. Otherwise, TrueNAS has this built into the GUI.
If you don’t like incremental snapshots, delete the target and send everything over again. Certainly simpler, but it takes ages.
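Assuming the Syncoid from the sanoid project is meant here, a typical invocation looks like the sketch below; the dataset and host names are examples, and it skips cleanly where syncoid isn’t installed:

```shell
#!/bin/sh
# Example Syncoid run: replicate tank/vms to backup/vms on a remote box.
# Syncoid creates its own sync snapshots and does the incremental
# bookkeeping, so no hand-rolled rotation or cleanup is needed.
if command -v syncoid >/dev/null 2>&1; then
  syncoid tank/vms root@backuphost:backup/vms
  status="ran"
else
  status="skipped (syncoid not installed)"
fi
echo "syncoid replication: $status"
```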
You really have no other option but zfs send. It’s a block-level technology: it ships only the deltas, and it is set-and-forget. You can even tune how long you keep the snaps on the other side, and make it different from the source if you want. If your deltas are huge every time, there’s something else wrong or you are misunderstanding something.
Most of the alternatives are file-level and that’s absolutely not what you want…
This is not a built-in feature of ZFS; you need to keep track of and delete older snapshots yourself. TrueNAS offers this by default, and scripts can help with it elsewhere. It’s basically a fancy crontab setup in the end.
It’s very easy to use. It doesn’t deal with send/recv, though, just with automatically creating and deleting snapshots with proper names: hourly, daily, weekly, monthly…very nice to always have a 15-minute-old snapshot around. They use about 96 KB each, so even thousands aren’t a problem.
I send snapshots daily, but setting crontab to replicate every 5 minutes or so shouldn’t be a problem…the delta from 5 minutes is minimal. This is how you deal with ransomware, too.
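Put together, the schedule above amounts to a couple of cron entries. The sketch below writes an example cron.d-style file; the paths, dataset names, and intervals are all illustrative, and it goes to /tmp so nothing on a live system is touched:

```shell
#!/bin/sh
# Example cron.d entries: sanoid snapshots and prunes per its policy file,
# syncoid ships the deltas every 5 minutes.
cat <<'EOF' > /tmp/zfs-backup-cron.example
# min  hour dom mon dow user  command
*/5  * * * *  root  /usr/sbin/syncoid tank/vms root@backuphost:backup/vms
*/15 * * * *  root  /usr/sbin/sanoid --cron
EOF
echo "example written to /tmp/zfs-backup-cron.example"
```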
I wish I had known this the last 3 years or so lol. I’ve been using some crude cron scripts.
Many thanks, will have a look at that. One less thing…
ZoL.
I don’t suppose you guys have an easy way of seeing the health? I’ve been using a script to export a .txt doc, then zenity it from my desktop. I was thinking of changing this over to direct SSH (which I have now set up due to the whole zfs send requirement).
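Since SSH is already set up, the health check can be a one-liner around `zpool status -x`, which prints a single line ("all pools are healthy") unless something is wrong. A sketch, where the host alias `nas` is a placeholder and the remote command is only shown, not run:

```shell
#!/bin/sh
# Remote health check to replace the txt-export-then-zenity flow.
check_pool_health() {
  # -x is quiet when healthy and verbose only on problems
  ssh nas zpool status -x
}
# Desktop popup variant, keeping the zenity approach:
#   zenity --info --text="$(ssh nas zpool status -x)"

echo "defined check_pool_health (run it from your desktop session)"
```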
Yeah, that’s unavoidable. But it’s all downhill from here: once everything is automated, it’s amazing to watch it all working with no big copy orgy or system load worth mentioning.
If you are backing up machine disk images, you should have an independent pool to store the blocks used for virtual memory. Those never need to be backed up.
The pool records all changes to the blocks when you take a snapshot, and many of those changes are virtual-memory writes. When you restore a VM from backup, you are not doing it in place; you boot the machine after restoring it, so the contents of virtual memory don’t matter to you. Since they don’t matter, you don’t need to back them up. However, the default way most operating systems handle virtual memory is to create a swap file or partition on the boot drive. Instead, on an independent pool that does not get backed up, allocate blocks that you assign to your VMs for virtual memory, then go into each guest OS and point its virtual memory at those blocks.
Next backup of that guest OS will be much smaller.
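A sketch of that setup, assuming one zvol per guest on a pool that is excluded from snapshots and replication; the pool name `scratch` and the in-guest device `/dev/vdb` are hypothetical:

```shell
#!/bin/sh
# Create a dedicated swap zvol on a pool that never gets backed up.
make_swap_zvol() {
  # 4G zvol for one guest's swap; appears as /dev/zvol/scratch/swap-vm1
  zfs create -V 4G scratch/swap-vm1
  # Attach it to the VM as an extra disk, then inside the guest:
  #   mkswap /dev/vdb && swapon /dev/vdb
  # and remove the old swap file/partition from the boot disk.
}

# Only attempt this where the scratch pool actually exists.
if command -v zfs >/dev/null 2>&1 && zfs list scratch >/dev/null 2>&1; then
  make_swap_zvol
else
  echo "scratch pool not present; nothing to do"
fi
```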
I worked on one file server running a bastardized OpenSolaris with ZFS that had daily snapshots from 2008 to 2016. The disk allocated for eight years of daily snapshots, for the primary file server of one university department with 35k undergraduates, was only 20% larger than the current disk allocation. My point is that storing ZFS snapshots long term should not use a significant amount of disk, nor should the daily diffs be large.
The only thing that would make the daily diffs large is if you were also storing all of the virtual-memory churn alongside the data you care about.