ZFS (ZoL) Snapshots Used Data Showing Inaccurate Results?

I recently migrated my FreeNAS box to Ubuntu 18.04 LTS with ZFS on Linux (ZoL) v0.7.5, and I have been working with the zfs/zpool commands directly instead of through a GUI. I have a cron script set up to take snapshots of the datasets I care about every hour. I also have a “scratch” dataset that does not get snapshotted. Occasionally I move data from the scratch dataset to a different dataset on the same zpool, but that does not seem to affect the “used” value of the snapshots.
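For context, the cron job is essentially a one-liner like the sketch below (the dataset name and timestamp format are illustrative, not my exact script):

```shell
# Hourly crontab entry that snapshots a dataset with a timestamped name.
# Note: % must be escaped as \% inside a crontab line.
# m h dom mon dow  command
0 * * * * /sbin/zfs snapshot tank/documents@autosnap_$(date +\%Y-\%m-\%d_\%H:\%M:\%S)_hourly
```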

To confirm this, I copied a 1GB file from the scratch dataset to my documents dataset, waited for the hour to pass, and then checked for snapshots using more than 0B. My documents dataset doesn’t change much, so there aren’t too many to list.

$ zfs list -r -t snapshot -o name,used tank/documents | grep -v 0B 
NAME                                                 USED
tank/documents@autosnap_2018-05-30_01:07:38_daily    1.24M
tank/documents@autosnap_2018-05-30_20:00:02_hourly   192K
tank/documents@autosnap_2018-05-30_21:00:01_hourly   192K
tank/documents@autosnap_2018-05-30_22:00:01_hourly   115K
tank/documents@autosnap_2018-05-31_20:00:02_hourly   422K
tank/documents@autosnap_2018-05-31_21:00:01_hourly   192K
tank/documents@autosnap_2018-05-31_22:00:01_hourly   128K
tank/documents@autosnap_2018-05-31_23:00:01_hourly   102K

Nothing in that output is anywhere near the 1GB I was expecting. I also tried manually creating a snapshot right after copying the 1GB file, to rule out a quirk in the script, but the result is the same. Can anyone explain this? Maybe @wendell?

That’s the beauty of snapshots: they don’t copy any data. Instead, they merely retain existing blocks for as long as at least one snapshot references them.

Since the 1GB file still exists on the live dataset, its space is attributed to that dataset. The snapshot doesn’t need to store another copy, so a snapshot’s USED value only counts blocks that are unique to it. Only once you delete (or overwrite) the file on the live dataset will its space be attributed to the snapshots that still reference it.
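You can watch this accounting happen on a throwaway dataset. A sketch, assuming root access and an existing pool named `tank` (dataset and file names here are just examples):

```shell
# Create a demo dataset and write a 1GB file into it.
zfs create tank/demo
dd if=/dev/urandom of=/tank/demo/big.bin bs=1M count=1024

# Snapshot it: USED stays near 0B, because the live dataset
# still references all of the file's blocks.
zfs snapshot tank/demo@before
zfs list -t snapshot -o name,used tank/demo@before

# Delete the file: its blocks are now referenced only by the
# snapshot, so the snapshot's USED jumps to roughly 1G.
rm /tank/demo/big.bin
zfs list -t snapshot -o name,used tank/demo@before

# Clean up when done.
zfs destroy -r tank/demo
```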


Thanks! That makes perfect sense. I have clearly been working on this NAS migration for too long and am mixing things up.