Pruning Snapshots in TrueNAS Scale

I thought this would be an easy one, but I must be searching for the wrong thing.

I’m using TrueNAS Scale for the first time on a home server. I have set up a replication task to replicate snapshots from a remote non-TrueNAS system running Ubuntu and ZFS. This part works fine, but I need a way to prune out the snapshots on my TrueNAS box over time. There doesn’t seem to be any way to do this. On my Ubuntu server I use Sanoid to manage the ZFS snapshots, but I don’t think that’s an option for TrueNAS.

Am I missing something simple here? This seems like very basic functionality that I assumed TrueNAS could do.
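For context, the Sanoid side on the Ubuntu source looks roughly like this (the dataset name and retention counts here are illustrative, not my actual config):

```
# /etc/sanoid/sanoid.conf (illustrative example)
[tank/data]
        use_template = production

[template_production]
        frequently = 12
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Sanoid takes and prunes snapshots on the source according to this policy, which is exactly the part I'm missing on the TrueNAS side for the replicated copies.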

If the snapshots are managed by truenas then you can set the snapshot lifetime and it will remove old snaps automatically. This is also true for replication tasks.

If you take the snapshots in a script or cron job or something, it won’t delete them.


That seems to be the problem. The source system is not a TrueNAS system. I’m pulling from another system running Sanoid to the TrueNAS system, but TrueNAS is not creating the snapshots. Is there no solution for this situation?

Not supported by truenas, you have to install it yourself. I doubt you’d be the first.

Depending on your security setup, when you set a truenas replication task to pull from a remote system you can set a snapshot lifetime. Could work?


At least the link you posted features Sanoid's author in the responses, so it looks possible to install? Even if manually?

The link mentions having to link a Perl binary though, so it's not configured out of the box…

Was looking at CORE, not SCALE

Scale


Running Sanoid via cron seems like the best option so far, but it seems hacky. There’s a note in the Scale topic there about needing to redo things after a TrueNAS upgrade, which isn’t ideal, but I suppose it’s better than finding an alternative to TrueNAS altogether.
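For the record, the cron approach would just be a standard crontab entry; the binary path here is an assumption about where a manual Sanoid install lands, so adjust for your system:

```
# Run Sanoid's snapshot-and-prune pass every 15 minutes
# (/usr/local/bin path is an assumption for a manual install)
*/15 * * * * root /usr/local/bin/sanoid --cron
```

The `--cron` flag tells Sanoid to both take any due snapshots and prune expired ones in a single pass.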

I saw this field for snapshot retention policy, but it doesn’t seem to have many options. I can choose either “None” or “Same as source” and I’m assuming there’s no way for TrueNAS to have a clue what’s going on with the source system since it is not TrueNAS.

Ah, I’m on core and they have a “custom” setting that would only control the snapshots on the truenas box.

Sounds like you just gotta follow the guide 🙂

I found the solution for this so I’m replying to close the loop and leave the details here for others to find.

There was no Custom option for snapshot retention on my replication job because I had not set a proper naming schema on the snapshot replication configuration itself. I was just using the regex (.*) to have the job replicate all snapshots, as I had no reason to suspect anything special about this field.

Since Sanoid produces snapshots with a regular enough naming scheme, I can be more specific about which snapshots I replicate, and this in turn allows the replication job to set a retention time on them and handle that part automatically with no further work.

For reference, I’m using this spec now:

autosnap_%Y-%m-%d_%H:%M:%S_frequently

I am now able to set a snapshot retention of 1 hour on the replication job and it is cleaning them up perfectly well (I have the job running every 5 minutes). I assume that I could also now layer other snapshot jobs on top of the dataset without issue, but I haven’t yet tried that.
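To make the retention behavior concrete, here is a small Python sketch of the pruning decision: parse the timestamp out of each snapshot name using the Sanoid naming scheme above, and flag anything older than the retention window. The snapshot names, the fixed "now", and the one-hour cutoff are all illustrative; this is not TrueNAS's actual implementation.

```python
from datetime import datetime, timedelta

# Sanoid's "frequently" autosnap naming scheme, matching the spec above.
FMT = "autosnap_%Y-%m-%d_%H:%M:%S_frequently"

def snapshots_to_prune(names, now, retention=timedelta(hours=1)):
    """Return the snapshot names older than `retention` as of `now`."""
    expired = []
    for name in names:
        try:
            taken = datetime.strptime(name, FMT)
        except ValueError:
            continue  # doesn't match the schema; leave it alone
        if now - taken > retention:
            expired.append(name)
    return expired

# Example: two snapshots over an hour old, one fresh, one unrelated.
now = datetime(2024, 5, 1, 12, 0, 0)
names = [
    "autosnap_2024-05-01_10:30:00_frequently",
    "autosnap_2024-05-01_10:55:00_frequently",
    "autosnap_2024-05-01_11:55:00_frequently",
    "manual-backup",
]
print(snapshots_to_prune(names, now))
# → ['autosnap_2024-05-01_10:30:00_frequently', 'autosnap_2024-05-01_10:55:00_frequently']
```

Note that a snapshot name which doesn't match the schema (like `manual-backup`) is simply skipped, which is consistent with why a naming schema is required before retention can be enabled: the job only prunes snapshots it can positively identify as its own.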

I did not find out about this restriction in TrueNAS from any documentation, just by clicking around the menus enough and trying to understand what was going on. This might be an area for possible improvement in the docs.
