New user first post; probably in the wrong place. Any mod, please feel free to change the tags and move it somewhere more appropriate, so I know the best place for similar posts in the future. This was originally posted to Stack Exchange (https://superuser.com/questions/1929562/are-there-any-known-flaws-with-my-benchmarking-of-btrfs-compression-options-for), but I thought I would try somewhere else to see if there is somewhere with more life. I claim authorship except for some good edits by Giacomo1968.
I am trying to determine roughly optimal BTRFS mount compression options for my particular data set. Rather than jumping in and reformatting my storage multiple times to test with my entire data set, I am testing just BTRFS filesystem and compression performance, using a RAM-based block device and a representative data sample.
Are there any potential flaws in my methodology below that would invalidate my results?
Is there anything suspicious about my results that makes you question their validity?
My approach.
- Created an uncompressed tar of a representative sample of my data (an 8.44 GiB tar file) on a tmpfs filesystem at /tmp/data.tar
- Created a 10 GiB raw file on a tmpfs (in-RAM filesystem), partitioned it with a single partition, and attached it as loop0, with the partition appearing as loop0p1
mkdir /tmp/ram
mkdir /tmp/ramfs
sudo mount -t tmpfs none /tmp/ram
dd if=/dev/zero of=/tmp/ram/raw bs=1G count=10
sudo cfdisk /tmp/ram/raw
sudo losetup -P /dev/loop0 /tmp/ram/raw
lsblk -o NAME,SIZE,FSTYPE,PATH
NAME SIZE FSTYPE PATH
loop0 10G /dev/loop0
└─loop0p1 10G /dev/loop0p1
For each mount option tested:
- Freshly format the partition with a new BTRFS filesystem
sudo mkfs.btrfs -fL test /dev/loop0p1
- Mount it with the test scenario's options, e.g.
sudo mount -o noatime,nodiratime,compress=zstd:15 /dev/loop0p1 /tmp/ramfs
- Copy the tmpfs test data to /dev/null to encourage it to be resident in RAM
sudo cp /tmp/data.tar /dev/null
- Copy the test data into the BTRFS filesystem, timing it, and use the elapsed time to calculate an absolute maximum write throughput unconstrained by the storage medium.
sudo time cp /tmp/data.tar /tmp/ramfs/
- Unmount the filesystem, detach and re-attach the loop device, then remount it read-only without specifying any compression, to try to invalidate any cached filesystem data.
sudo umount /tmp/ramfs
sudo losetup -d /dev/loop0
sudo losetup -P /dev/loop0 /tmp/ram/raw
sudo mount -o noatime,nodiratime,ro /dev/loop0p1 /tmp/ramfs
- Copy the data back out of the BTRFS filesystem to /dev/null, timing it, and use the elapsed time to calculate an absolute maximum read throughput unconstrained by the storage medium.
sudo time cp /tmp/ramfs/data.tar /dev/null
- Unmount the BTRFS file system
sudo umount /tmp/ramfs
After all testing:
- Clean up by unmounting everything and deleting the /tmp/ directories and files used.
- Repeat as desired for averaging.
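For repeat runs, one pass over all the compression options can be scripted roughly as below. This is only a sketch, assuming the loop device and directories set up above and GNU time (/usr/bin/time -f '%e') for capturing elapsed seconds; I actually ran the steps by hand.
for opt in zstd:15 zstd:9 zstd:5 zstd:3 zstd:1 no; do
    # fresh filesystem for each scenario
    sudo mkfs.btrfs -fL test /dev/loop0p1 > /dev/null
    sudo mount -o noatime,nodiratime,compress=$opt /dev/loop0p1 /tmp/ramfs
    # warm the source data, then time the write
    sudo cp /tmp/data.tar /dev/null
    write_s=$( { sudo /usr/bin/time -f '%e' cp /tmp/data.tar /tmp/ramfs/; } 2>&1 >/dev/null )
    # detach/re-attach the loop device and remount read-only, then time the read
    sudo umount /tmp/ramfs
    sudo losetup -d /dev/loop0
    sudo losetup -P /dev/loop0 /tmp/ram/raw
    sudo mount -o noatime,nodiratime,ro /dev/loop0p1 /tmp/ramfs
    read_s=$( { sudo /usr/bin/time -f '%e' cp /tmp/ramfs/data.tar /dev/null; } 2>&1 >/dev/null )
    sudo umount /tmp/ramfs
    echo "compress=$opt write=${write_s}s read=${read_s}s"
done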
Throughput is measured as uncompressed data size divided by the reported elapsed time.
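As a worked example with made-up timing: if the 8.44 GiB tar (about 9.06 GB) took 6.0 s to write, that would count as 9.06 GB / 6.0 s ≈ 1.5 GB/s write throughput.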
My results.
All values rounded down to 2 significant figures
sudo time cp /tmp/data.tar /dev/null consistently gives 9.7 GB/s
| Compression option | Write throughput (GB/s) | Read throughput (GB/s) |
|---|---|---|
| compress=zstd:15 | 0.14 | 2.0 |
| compress=zstd:9 | 0.70 | 2.1 |
| compress=zstd:5 | 1.2 | 2.1 |
| compress=zstd:3 | 1.5 | 2.1 |
| compress=zstd:1 | 1.5 | 2.1 |
| compress=no | 1.5 | 2.9 |
Overall CPU utilisation never reaches 40% in any test, but up to 8 SMT threads approach full utilisation. CPU utilisation is not a concern for me, so I did not monitor it closely.
Second set of results, for a data set of smaller files.
Data files in a /tmp/data directory totalling 8.71 GiB; average file size ~44 KiB; maximum file size >10 MiB.
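Those figures can be gathered with something along these lines (assuming GNU du and find):
du -sh --apparent-size /tmp/data
find /tmp/data -type f -printf '%s\n' | awk '{ n++; s+=$1; if ($1>m) m=$1 } END { printf "files=%d avg=%.0f bytes max=%d bytes\n", n, s/n, m }'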
I can’t cp a directory to /dev/null, so I am using tar instead of cp for those measurements (the reason I originally used a pre-tarred data set): tar -cf /dev/null /tmp/data to direct the source data to /dev/null, cp -r /tmp/data /tmp/ramfs/ for the write throughput measurement, and tar -cf /dev/null /tmp/ramfs/data for the read throughput measurement.
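Laid out in the same pattern as the single-tar steps above, one scenario for this data set looks roughly like this (the sudo time prefixes on the measured steps mirror the earlier method):
tar -cf /dev/null /tmp/data
sudo time cp -r /tmp/data /tmp/ramfs/
# remount read-only as described earlier, then:
sudo time tar -cf /dev/null /tmp/ramfs/data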
All values rounded down to 2 significant figures
sudo time tar -cf /dev/null /tmp/data consistently gives 15 GB/s
| Compression option | Write throughput (GB/s) | Read throughput (GB/s) |
|---|---|---|
| compress=zstd:15 | 0.017 | 3.8 |
| compress=zstd:9 | 0.10 | 3.8 |
| compress=zstd:5 | 0.19 | 3.8 |
| compress=zstd:3 | 0.32 | 3.8 |
| compress=zstd:1 | 0.34 | 3.8 |
| compress=no | 0.34 | 3.8 |
CPU utilisation was always under 8% with no heavily loaded SMT threads.
For reference.
I am using Debian 13 (up to date as of 31/10/2025) on a 3900X desktop with 32 GiB of RAM (one 16 GiB DDR4 3200 MT/s stick per channel). About 2.5 GiB of RAM was in use before and after testing, with a maximum of ~17 GiB used during testing. Around 1 GiB of swap (on an NVMe Samsung 970 Evo Plus, a separate device from my rootfs) was in use before, during and after the tests.