TL;DR: Turns out the answer is ZFS padding. Cross-post here: New Pool and Data Compression Issues - TrueNAS General - TrueNAS Community Forums
In the process of migrating data to a brand-new pool on the latest version of TrueNAS Electric Eel, I found that data on the receiving end was not always written to the pool as well compressed as expected, despite compression being set appropriately.
The destination pool has been built under both TrueNAS CORE 13.0-U6.4 (BSD) and SCALE (ElectricEel-24.10.1), with the same results on both.
The destination system has a second-generation 32-core AMD EPYC CPU and two raidz2 vdevs of 6 disks each.
The settings used to create the new dataset in the web GUI under SCALE are listed below (a rough command-line equivalent follows the list):
Dataset Preset: SMB
Quotas are left at default.
Encryption: On (Inherit)
Sync: Inherit (Standard)
Compression Level: Inherit (ZSTD-19)
Enable Atime: Inherit (Off)
ZFS Dedupe: Inherit (Off)
Case Sensitivity: Insensitive
Checksum: Inherit (On)
Read-Only: Inherit (Off)
Exec: Inherit (On)
Snapshot Directory:
Snapdev: Inherit (Hidden)
Copies: 1
Record Size: Inherit (1MiB) – I have tried 1MiB, 4MiB, and 16MiB with the same results
ACL Mode: Restricted
Metadata (Special) Small Block Size: Inherit (0)
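For reference, the settings above translate roughly into the following ZFS properties on the command line. This is only a sketch: "newtank/media" is a placeholder dataset name, encryption and sync are inherited from the parent, and the GUI sets some additional properties of its own.

zfs create \
    -o compression=zstd-19 \
    -o atime=off \
    -o dedup=off \
    -o casesensitivity=insensitive \
    -o checksum=on \
    -o readonly=off \
    -o exec=on \
    -o snapdev=hidden \
    -o copies=1 \
    -o recordsize=1M \
    -o aclmode=restricted \
    -o special_small_blocks=0 \
    newtank/media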
The original pool was built on TrueNAS 11.x or early 12.x with settings similar to those above. The origin pool is currently a 4-disk raidz1 on ElectricEel-24.10.1, and its ZFS feature flags have not been upgraded since before migrating to SCALE. The method of transfer is a TeraCopy file transfer on Windows over SMB via a 100Gb Ethernet link. One of the hopes in doing it this way was to obtain better overall compression for all of the data. I am open to using other tools (incl. zfs send/recv) to achieve the desired result.
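If zfs send/recv turns out to be the better path, one approach (just a sketch, with "oldtank/media" and "newtank/media" as placeholder dataset names) would be to send without -c, so blocks are decompressed in transit and recompressed on the destination with its own zstd-19 setting:

# snapshot the source dataset
zfs snapshot oldtank/media@migrate
# plain send (no -c/--compressed), so the receiving pool recompresses
# each block according to its own compression=zstd-19 property
zfs send oldtank/media@migrate | zfs recv -u -o compression=zstd-19 newtank/media

For a transfer between two machines, the stream would be piped through ssh or a similar transport instead of a local pipe.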
The data in question is a mix of audio, video, text files, office docs, executables, archives, etc. Data on the origin pool manages to achieve better compression than on the new pool with the same settings in the web GUI; the same exact data often takes up more space on the destination pool than on the origin pool.
As an example, a directory of 24 files totaling 9.41GB is written to the origin pool as 9.13GB but to the destination pool as 9.40GB.
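A useful check here (a sketch; dataset names and paths are placeholders) is to compare the logical size, the space actually charged, and the achieved compression ratio on both pools, which helps separate compression effects from raidz parity/padding overhead:

# dataset-level view: logical bytes written vs. space charged vs. ratio
zfs get -p logicalused,used,compressratio,recordsize oldtank/media newtank/media
# per-directory view: apparent file size vs. blocks allocated on disk
du --apparent-size -sh /mnt/newtank/media/example-dir
du -sh /mnt/newtank/media/example-dir

If compressratio is comparable on both pools but used is noticeably higher on the destination, the difference is likely coming from the allocation/padding behavior of the 6-wide raidz2 vdevs rather than from compression itself (which is where the TL;DR above ended up).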
In an effort to mitigate this issue, several tunables have been applied, to no avail:
Under “Init/Shutdown Scripts” > Pre-Init Command
echo 0 > /sys/module/zfs/parameters/zstd_earlyabort_pass
and/or
echo 1048576 > /sys/module/zfs/parameters/zstd_abort_size
or
echo 0 > /sys/module/zfs/parameters/zstd_abort_size
CPU usage would ramp up, but the data would still be written with minimal space savings at best, and sometimes worse than before.
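A quick way to confirm the tunables were actually applied at runtime, and to see the resulting dataset-level compression, is shown below (again a sketch; the dataset name is a placeholder):

# current values of the zstd early-abort parameters
cat /sys/module/zfs/parameters/zstd_earlyabort_pass
cat /sys/module/zfs/parameters/zstd_abort_size
# achieved compression ratio on the destination dataset
zfs get compressratio newtank/media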