Btrfs Compression Considerations

I have an iSCSI mount with btrfs on top. It has 3 subvolumes for my 3 data sets: home photos/videos; media (books, games, movies, shows, music); and persistent homelab data. If I turn on compression, are there any considerations I should make besides perhaps aligning the compression type with the data set for maximum compression? Should I not compress certain things?

I do incremental backups to gdrive with Duplicity, which compresses by default. Any problems there? I would assume Duplicity would see the newly compressed data as “new”, so I should do a new full backup and increment from there.

photos/videos/shows/music … won’t really compress much. books are small - maybe just defrag the books directory to compress them once in a while.

It works well for binaries / code / random user data.

  1. if you boot from iSCSI and have an old bootloader, it may not support zstd

  2. as per:
    Compression - btrfs Wiki

There is a simple decision logic: if the first portion of data being compressed is not smaller than the original, the compression of the file is disabled – unless the filesystem is mounted with -o compress-force. In that case compression will always be attempted on the file only to be later discarded.

basically, if you have a subvolume that is 50% video / 50% text - no harm; if it’s 90/10 … maybe it becomes wasteful.
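That first-portion check is easy to sketch. Here’s a rough Python illustration – using stdlib zlib as a stand-in for whatever algorithm the filesystem is mounted with, and an arbitrary block size; the real kernel logic works per extent and is more nuanced:

```python
import os
import zlib

def would_compress(first_block: bytes, force: bool = False) -> bool:
    """Mimic btrfs's check: try compressing the first portion of a file;
    if the result isn't smaller, skip compression for that file --
    unless mounted with -o compress-force, which always attempts it."""
    return force or len(zlib.compress(first_block)) < len(first_block)

print(would_compress(b"chapter one " * 1000))   # book-like text: True
print(would_compress(os.urandom(64 * 1024)))    # video-like random data: False
```

So on a mostly-video subvolume, compress= cheaply bails out per file, while compress-force keeps burning CPU on data that will never shrink.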


fwiw I’m using compress=zstd in my fstab for my / (root) and this shows up as:

$ grep compress /proc/mounts
/dev/sda3 / btrfs rw,noatime,compress=zstd:3,space_cache,autodefrag,subvolid=5,subvol=/ 0 0

which then works out to:

$ sudo compsize -x /
Processed 309136 files, 211128 regular extents (214273 refs), 167370 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       58%       12G          20G          20G
none       100%      8.6G         8.6G         8.5G
zlib        38%      8.9M          23M          23M
zstd        28%      3.3G          11G          12G
prealloc   100%       16M          16M          61M
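As a sanity check on what the Perc column means: it’s just on-disk size over uncompressed size. Recomputing from the rounded GiB totals above (compsize itself uses exact byte counts, which is why it prints 58% rather than 60%):

```python
def ratio(disk_gib: float, uncompressed_gib: float) -> float:
    """Compression ratio the way compsize's 'Perc' column reports it:
    on-disk size divided by uncompressed size."""
    return disk_gib / uncompressed_gib

# Rounded TOTAL figures from the compsize output above
print(f"{ratio(12, 20):.0%}")   # roughly 60% with the rounded values
```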

Oh, good consideration regarding the “first portion” bit. Makes my mismatched data sets possibly behave differently than I would expect. Thanks for that. I guess I’ll copy over some data and run some tests with it. Overall it sounds like I won’t hurt anything, just that some things won’t compress well.

you can try compressing some data even if you don’t have btrfs mounted with compression options … see how it goes … if you don’t like it, you can uncompress it.

sudo btrfs fi defrag -c zstd -r /books should just work regardless of mount options.

You can use chattr +c on a file or directory, or set the compression property with btrfs property set; see man 5 btrfs and man chattr.

One thing to keep in mind is the interaction between compression and CoW: compression basically forces copy-on-write, and it force-enables checksumming.
