My btrfs filesystem won't resize after adding a disk

My disk array was full, so I added another 18 TB disk as follows:

# btrfs device add /dev/sdd /mnt
# btrfs balance start --bg /mnt

But df still shows no gain in available space.

# btrfs filesystem df /mnt
Data, RAID1: total=21.63TiB, used=21.54TiB
System, RAID1: total=32.00MiB, used=2.97MiB
Metadata, RAID1: total=24.00GiB, used=22.94GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
# btrfs filesystem usage /mnt
Overall:
    Device size:                  60.03TiB
    Device allocated:             43.31TiB
    Device unallocated:           16.72TiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                         43.13TiB
    Free (estimated):              8.45TiB      (min: 8.45TiB)
    Free (statfs, df):             8.25TiB
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,RAID1: Size:21.63TiB, Used:21.54TiB (99.59%)
   /dev/sdb        6.37TiB
   /dev/sda        6.36TiB
   /dev/sdc       11.43TiB
   /dev/sdf      940.00GiB
   /dev/sde        4.55TiB
   /dev/sdd       13.63TiB

Metadata,RAID1: Size:24.00GiB, Used:22.94GiB (95.60%)
   /dev/sdb        6.00GiB
   /dev/sda       10.00GiB
   /dev/sdc       13.00GiB
   /dev/sde        4.00GiB
   /dev/sdd       15.00GiB

System,RAID1: Size:32.00MiB, Used:2.97MiB (9.28%)
   /dev/sda       32.00MiB
   /dev/sdd       32.00MiB

Unallocated:
   /dev/sdb        2.72TiB
   /dev/sda        2.72TiB
   /dev/sdc        3.11TiB
   /dev/sdf        2.72TiB
   /dev/sde        2.72TiB
   /dev/sdd        2.72TiB

How can I make use of my new disk?

Perhaps the balance is still running? What is the output of btrfs balance status?

I assume the new disk is sdf? Perhaps the used space is still slowly growing?

I had to wait 3.5 days for the balance to finish. The kernel log says it finished successfully.

kernel: BTRFS info (device sdb): found 12278 extents, stage: move data extents
kernel: BTRFS info (device sdb): found 12278 extents, stage: update data pointers
kernel: BTRFS info (device sdb): balance: ended with status: 0

/dev/sdd is the 18 TB disk:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0   9,1T  0 disk 
sdb      8:16   0   9,1T  0 disk 
sdc      8:32   0  14,6T  0 disk 
sdd      8:48   0  16,4T  0 disk 
sde      8:64   0   7,3T  0 disk 
sdf      8:80   0   3,6T  0 disk 

I also ran btrfs filesystem resize <devid>:max /mnt for each device ID, to grow each disk to its maximum capacity.

Everything looks fine to me then; it seems the data is distributed across the disks according to their capacities?

So is my solution to buy a second 18 TB HDD?

Why? You still have at least 8 TiB free.

The discrepancy between btrfs filesystem usage and btrfs filesystem df is exactly my problem: the free space shown by usage never shows up in df.
Also note that usage shows data as 99.59% full.

The interpretation of free space on btrfs is tricky. If you really want to understand it, you should read the docs:

https://btrfs.readthedocs.io/en/latest/btrfs-filesystem.html

However, the 99% used for data does not mean what you think: it means that of the space already allocated for data chunks, 99% is in use. So it's actually good for this to be quite high, and it's also expected right after a full balance.

In any case, the unallocated space is still free: with RAID1 (data ratio 2.00), the 16.72 TiB of raw unallocated space gives you roughly 8.36 TiB of usable capacity.
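To make the numbers concrete, here is a small back-of-the-envelope sketch in Python (my own arithmetic, not a btrfs tool) that reproduces the "Free (estimated)" figure from the usage output above:

```python
# Figures taken from the "btrfs filesystem usage /mnt" output above, in TiB.
data_ratio = 2.0            # RAID1 stores every block twice
unallocated_raw = 16.72     # Device unallocated (raw, on-disk)
data_total = 21.63          # space allocated to data chunks (logical)
data_used = 21.54           # data actually stored (logical)

# The "99.59%" figure is just chunk utilisation: used / allocated.
chunk_utilisation = data_used / data_total

# Estimated free space = unused room in existing data chunks
# + raw unallocated space divided by the RAID1 ratio.
free_estimated = (data_total - data_used) + unallocated_raw / data_ratio

print(f"data chunk utilisation: {chunk_utilisation:.2%}")   # ~99.58%
print(f"free (estimated):       {free_estimated:.2f} TiB")  # ~8.45 TiB
```

The tiny rounding differences against the tool's output come from it working with exact byte counts rather than the two-decimal TiB values shown here.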


Thanks, now I get it. The btrfs filesystem df command totally misled me.