ZFS question (space)

I am new to ZFS - I was introduced to it through Proxmox… and I have some questions. What does all of this mean - why is so much space used?

> root@pve3:~# zfs list -o space
> NAME                         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
> Largepool                    5.67T  27.9T     6.65T   15.0T             0B      6.20T
> Largepool/subvol-111-disk-0  1.80T  6.19T     3.09T   3.10T             0B         0B

  pool: Largepool
 state: ONLINE
  scan: scrub in progress since Sun Aug  8 00:24:11 2021
        28.4T scanned at 613M/s, 27.9T issued at 603M/s, 36.6T total
        0B repaired, 76.21% done, 0 days 04:12:30 to go
config:

        NAME                                 STATE     READ WRITE CKSUM
        Largepool                            ONLINE       0     0     0
          raidz2-0                           ONLINE       0     0     0
            wwn-0x5000c500c298f441           ONLINE       0     0     0
            wwn-0x5000c500c2b99718           ONLINE       0     0     0
            wwn-0x5000c500c29f5ad2           ONLINE       0     0     0
            wwn-0x5000c500c29f64bb           ONLINE       0     0     0
            wwn-0x5000c500c298c6f4           ONLINE       0     0     0
            wwn-0x5000c500c29f4e09           ONLINE       0     0     0
            ata-ST5000LM000-2AN170_WCJ25158  ONLINE       0     0     0
            ata-ST5000LM000-2AN170_WCJ230Z5  ONLINE       0     0     0
            ata-ST5000LM000-2AN170_WCJ211K2  ONLINE       0     0     0
            ata-ST5000LM000-2AN170_WCJ22GPE  ONLINE       0     0     0

So: 15T of data in the root of the pool, then 3T of data in its child dataset.
The child also has 3T of snapshots (i.e. 3T of deletions/changes since its oldest snapshot was taken), and the main pool has about 6.5T of snapshot data (data deleted/changed since its first snapshot).
My rough calculation is ~27T of space used, though the rounded numbers hide a bit.
That would make your drives 5TB in size, with about 4.5T usable on each, plus 2 “parity” drives in raidz2 (36.6T total from the scrub / 8 data drives ≈ 4.5T each).
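For reference, the USED column in zfs list -o space is just the sum of the other USED* columns (USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD), and that roughly checks out with the rounded numbers above:

Largepool:                   6.65T + 15.0T + 0B + 6.20T ≈ 27.9T
Largepool/subvol-111-disk-0: 3.09T + 3.10T + 0B + 0B    ≈ 6.19T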

if you run

$ zfs list -t snapshot -r Largepool

you should get a list of the snapshots that are taking up that roughly 9T of space.
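If that list is long, you can also ask zfs list to sort it by the used property (the -S flag sorts descending), so the snapshots holding the most space end up at the top:

$ zfs list -t snapshot -r -o name,used -S used Largepool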

You might, or might not, want to prune some of the older ones.

(you can check how much space would be recovered with zfs destroy -nv pool@firstsnapshotname%lastsnapshotyouwantpruned, removing the -nv to actually go ahead with it…)
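For example, with made-up snapshot names (substitute a real first and last snapshot from the list above):

# -n = dry run, -v = report the snapshots and space that would be reclaimed
$ zfs destroy -nv Largepool/subvol-111-disk-0@snap-2021-01-01%snap-2021-06-01
# drop the -nv to actually destroy that whole range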


Yes - I do have a bunch of snapshots… I keep about 30 or more and I remove them on a regular basis. What is USEDCHILD?


subvol-111 is a child dataset of Largepool, and USEDCHILD is the space consumed by a dataset's children - so the 6.20T shown for Largepool is basically subvol-111 (plus any other children) rolled up into the parent's accounting.

Child datasets are awesome; I seldom mount the parent and just have a bunch of children, but I make mine myself instead of letting Proxmox do it.


An FYI in case you aren’t aware of them, Sanoid or pyznap are great tools to automatically create/prune/send snapshots.
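As a rough sketch of what a Sanoid policy looks like (the dataset name and retention numbers here are just an illustration; check the sanoid.conf docs for the real options):

[Largepool/subvol-111-disk-0]
        use_template = production

[template_production]
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes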

And while ZFS has a root dataset for the pool, it’s best to create your own specific datasets and only use those. This allows per-workload tuning (different record sizes, encryption, compression, etc.) and much easier management. It also makes things easier to shuffle around, especially when you run low on space.
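For example (dataset names and property values here are purely illustrative, not a recommendation):

$ zfs create -o recordsize=1M -o compression=lz4 Largepool/media
$ zfs create -o recordsize=16K -o compression=lz4 Largepool/databases

Each child then gets its own row, with its own USEDDS/USEDSNAP accounting, in zfs list -o space.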


One can also copy an entire child from one parent (or pool) to another with zfs send/receive, keeping snapshots etc., which is just awesome.
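A minimal sketch of that, assuming a second pool called Otherpool (the -R flag on zfs send includes all of the dataset's snapshots and descendants in the stream):

$ zfs snapshot -r Largepool/subvol-111-disk-0@move
$ zfs send -R Largepool/subvol-111-disk-0@move | zfs receive Otherpool/subvol-111-disk-0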


Thank you - this all makes sense. ZFS is pretty awesome.
