
Using ZFS on Ubuntu 21.04 (Desktop) for snapshots - Best Practices?

Hi L1 community-

I am looking for ideas and feedback regarding best practices for using ZFS on Ubuntu Desktop (actually a laptop!) for both snapshots and ZFS-send.

Goal

Use ZFS on my Ubuntu 21.04 laptop to perform snapshots and then send the snapshots to my TrueNAS Core NAS.

Primary questions

  1. Is there a well regarded GUI manager for ZFS on Ubuntu?
  2. What data should I include/exclude from my snapshots and ZFS send?

GUI manager

I’ve read about Webmin, Cockpit, and Houston (from 45Drives). These all seem to be directed at the server community. That isn’t a problem per se, but I’m curious whether the community has a favorite for managing ZFS on the desktop, or whether it’s best to just stick with the CLI. I’m not that familiar with ZFS on the CLI, but I’m pretty familiar with ZFS terminology as I’ve been using TrueNAS (FreeNAS) for a few years.
What does the L1 community do?
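For what it’s worth, the manual CLI workflow is only a few commands. A minimal sketch, assuming a hypothetical dataset layout of rpool/home (substitute whatever zfs list shows on your system):

```shell
# Hypothetical dataset name (rpool/home); adjust to your layout.
# Take a recursive snapshot of a dataset and all of its children:
sudo zfs snapshot -r rpool/home@manual-2021-08-01

# List existing snapshots under that dataset to verify:
zfs list -t snapshot -r rpool/home

# Destroy a snapshot once it is no longer needed:
sudo zfs destroy rpool/home@manual-2021-08-01
```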

What data to snapshot?

I’m not super familiar with the Linux filesystem hierarchy. It’s something that has evaded my knowledge despite being a moderate Linux user (daily user for about four years).

I’m thinking it is probably best to use an “include all except” approach instead of an “include only these items” approach.

As such, my thought is to exclude the /mnt and /media directories. I think I will mount SyncThing and Nextcloud into /mnt, or maybe create a third root directory. I have no need to back up SyncThing and Nextcloud, as these areas are already on my TrueNAS server being backed up offsite. It would be redundant to include them in the backup of my desktop.
Anything else I should exclude?
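One note from my reading: if /mnt and /media live on their own datasets, some snapshot tooling can skip them per dataset. For example, zfs-auto-snapshot honors the com.sun:auto-snapshot property. A sketch with a hypothetical rpool/mnt dataset:

```shell
# Hypothetical dataset name; only applies if /mnt is its own dataset.
# zfs-auto-snapshot skips datasets where this property is false:
sudo zfs set com.sun:auto-snapshot=false rpool/mnt

# Confirm the property took effect:
zfs get com.sun:auto-snapshot rpool/mnt
```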

Thanks all. Any other suggestions surrounding ZFS on Linux/Ubuntu are welcome.

  1. I guess there is not.
  2. I created datasets for the following directories to be backed up:
# User directory
/home 
# Games
/var/games
# Webserver 
/var/www
# for GNOME
/var/lib/AccountsService
# for Docker
/var/lib/docker
# for NFS
/var/lib/nfs
# for LXC
/var/lib/lxc
#libvirt
/var/lib/libvirt

I have rpool_$INST_UUID/$INST_ID/ROOT/ for my system and rpool_$INST_UUID/$INST_ID/DATA/ for all the folders listed above. I have versioning for the ROOT dataset via bieaz and do backups of the DATA dataset via sanoid.
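For anyone following along, a minimal sanoid policy for a data dataset might look like this. The dataset name rpool/DATA and the retention numbers are placeholders; sanoid reads this from /etc/sanoid/sanoid.conf:

```shell
# Append a minimal policy to sanoid's config (placeholder dataset name):
cat >> /etc/sanoid/sanoid.conf <<'EOF'
[rpool/DATA]
        use_template = production
        recursive = yes

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
EOF
```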

I’ve heard of Jim Salter’s Sanoid before and have been considering this.
I’m curious- are you running Ubuntu? If so, what version?

At some point after Ubuntu implemented native ZFS support, they also added automatic snapshots (created by their zsys tool). These auto snapshots are named starting with autozsys_ followed by six seemingly random characters.
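You can separate them from manual or sanoid snapshots by filtering on the snapshot name. A sketch (the pool and dataset names are whatever zfs list reports on your system):

```shell
# List only zsys's automatic snapshots; their names contain "@autozsys_":
zfs list -H -t snapshot -o name | grep '@autozsys_'
```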

The first form of zfs-send I tried was pulling the snapshots over to my TrueNAS server. I wasn’t sure if this was the best approach but wanted to try. After trying it, I don’t think it is ideal or even possible due to naming schema issues in TrueNAS.

Do the sanoid snapshots create any issue/conflict with the autozsys system snapshots?

Looks like sanoid + syncoid would be a great combo. Thanks for sharing your usage.
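If it helps, the usual pattern (per the sanoid README) is to run sanoid on a schedule so it snapshots and prunes according to its config, and run syncoid separately for replication. The cron schedule shown is just an example:

```shell
# Take and prune snapshots according to /etc/sanoid/sanoid.conf.
# Typically run every few minutes from cron or a systemd timer, e.g.:
#   */15 * * * * root /usr/sbin/sanoid --cron
sanoid --cron
```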

1 Like

No, I am not using Ubuntu. I am on Arch. But for my setups I follow the instructions on the OpenZFS page. They have instructions for root on ZFS for certain distributions like Arch, and a quick look revealed they seem to have instructions for Ubuntu, too. I usually follow these instructions, sometimes with a few modifications of my own, and then set up Sanoid and Syncoid for my user data. If you only want to use ZFS for data storage, you obviously don’t need to set up root on ZFS and can use sanoid and syncoid directly. On my server, where I don’t have that much user data, I sometimes use btrfs, too. I am not too familiar with btrfs, though. On the plus side, btrfs has support in many distributions now, so it is a first-class citizen, often even during the installation process.

I have sanoid set up to pull incrementals from my main machine to a backup Pi, but both run Ubuntu; I’m not sure if there is a native app in TrueNAS.

From what I’ve experienced, syncoid will copy both its own created snapshots and snapshots created by other means. Meanwhile, sanoid will auto-prune snapshots it creates while leaving snapshots created by others alone (by design), so the system should not get overloaded by snapshots from sanoid itself, but the user can choose which others to remove.

Is that what you were asking about with naming schemes?

No, my comment about naming schemas is mostly due to an issue/limitation of using TrueNAS to pull the snapshots from the Ubuntu system. See more here: this person explains it quite well.

Regardless, I have decided not to use the TrueNAS “replication” feature for this. I think I will stick with Syncoid to send the snapshots to the TrueNAS box.

Right now I am having some issues with Syncoid telling me that the dataset on the remote system does not exist; however, when I SSH into it, it very clearly does exist. I’m browsing around the Syncoid GitHub issues to try to find someone else who has run into the same problem. At this point I doubt it is a problem with Syncoid; I think it is just mildly difficult to set up.

Huh, thanks for that, I didn’t know it was an issue.

I don’t see a way around it until JRS or IX change something in their code, but glad to learn more.

I agree, sending out from the laptop would be a quicker fix in the meantime.

1 Like

This ^

^
|

I’ve never used ZFS snapshots to back up an entire OS before, so I can’t comment on that. However, I have two backup schemas, depending on the importance of the VM. If I’m running a desktop, I usually only back up /home and maybe manually copy some /etc files; if the system goes down, it won’t take long for me to get back up and running (mostly because I have a portable setup). The other schema is for times when I need to back up an entire OS that would be a hassle to recreate (like one with lots of custom settings and maybe manually compiled stuff using make and make install, or other custom software on it). This doesn’t happen often, but in that case I back up the whole / and exclude these:

--exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}

In both schemas, I back up from inside the OS (client) I am backing up. If I need a centralized way of doing things, I use SSH from the server to connect to the client and start a backup. I’ve only used rsync for backups and haven’t had any issues with restoring full installs: I just used a live environment to copy things over to a freshly formatted disk or vdisk, then changed the fstab entries (because obviously, with a new disk, unless you did a full dd, it’s going to have different UUIDs). Bonus points: with this, you can clone a physical system and reproduce it in a VM, or even change the underlying file system (for example, moving from ext4 to encrypted LVM-on-LUKS).
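Putting the exclude list together with rsync, a full-system backup along these lines might look like the following. The destination /mnt/backup is hypothetical, and the braces rely on bash brace expansion:

```shell
# -a  archive mode (permissions, times, symlinks, devices, etc.)
# -A  preserve ACLs; -X preserve extended attributes; -H preserve hard links
rsync -aAXH \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
  / /mnt/backup/
```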

Personally, I think backing up an entire OS is really wasteful of backup storage space, unless you have lots of systems with the same OS and you use ZFS deduplication on your backup server. The second schema basically never happens in my homelab.

2 Likes

It is probably a permissions problem. Normally there are two ways to do this: either you give the user on the target machine permission to execute zfs commands as root, for example via /etc/sudoers, or you use the zfs allow command to hand out permissions via ZFS’s integrated permission management. The latter option is the one to favor, in my opinion! So if you get a message that the snapshot does not exist, it seems the SSH user supposed to receive the data does not have permission to read the snapshots from the parent dataset.

You can use something like zfs allow -dlu yourusername create,receive,destroy,rollback,snapshot,hold,release,mount targetdataset. For more information read the documentation.
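Concretely, the delegation plus a quick check might look like this (the user backupuser and the dataset tank/thinkpad-backup are hypothetical placeholders):

```shell
# Delegate the permissions syncoid needs on the receiving side
# (-d: apply to descendants, -l: apply locally, -u: to a user):
sudo zfs allow -dlu backupuser \
  create,receive,destroy,rollback,snapshot,hold,release,mount \
  tank/thinkpad-backup

# Print the delegated permissions on the dataset to confirm:
zfs allow tank/thinkpad-backup
```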

Not a permissions problem, but a user knowledge problem! I had manually created the Ubuntu datasets on my TrueNAS box, thinking I was supposed to. In fact, it’s the opposite: you must allow Syncoid to create the datasets on the remote box. On TrueNAS I should only create a single parent dataset, something such as thinkpad-backup, and then let Syncoid create the child datasets.

Works quite well now.
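For anyone hitting the same wall, the working shape is roughly this (hostnames, usernames, and dataset names are hypothetical): create only the parent dataset in the TrueNAS UI, then let syncoid create everything underneath on the first run:

```shell
# Push rpool/DATA and all child datasets to the NAS over SSH.
# Only tank/thinkpad-backup exists beforehand; syncoid creates
# tank/thinkpad-backup/DATA and its children itself:
syncoid --recursive rpool/DATA backup@truenas:tank/thinkpad-backup/DATA
```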

The next issue I am running into is TrueNAS giving me errors about “no incremental base” when replicating from my primary TrueNAS to my offsite TrueNAS box. I believe the issue is that Syncoid creates datasets on my primary TrueNAS, and the primary then isn’t replicating them offsite properly. Will dig into it further tomorrow; not sure what the fix is.

2 Likes