The system has 32GB of RAM; I don’t think I’ve seen it more than half used.
By default ZFS only uses up to half the available RAM. This can be changed to whatever you like. Note that the system asking ZFS to free up RAM is very slow, and sometimes not fast enough to prevent an out-of-memory error, so give your system plenty of breathing room. Honestly, 16GB is more than enough for that pool size.
Double check these commands, but you can change the max ARC size in the following ways:
# To get the current ARC size and various other ARC statistics, run this command:
cat /proc/spl/kstat/zfs/arcstats
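The raw arcstats output is long; if you just want the current ARC size and cap, something like this works (the awk one-liner is just a convenience, and arc_summary is the report tool that ships with the OpenZFS userland):
# Show just the current ARC size and the configured max, in bytes
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# Or, if the OpenZFS userland tools are installed, a friendlier report
arc_summary | head -40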
# Changes the max ARC size immediately (value in bytes; 34359738368 = 32 GiB), though arcstats may not reflect it right away. Lasts only until reboot
echo "34359738368" > /sys/module/zfs/parameters/zfs_arc_max
# Permanently changes the max ARC size, but only takes effect on the next reboot.
echo "options zfs zfs_arc_max=34359738368" >> /etc/modprobe.d/zfs.conf
If I reformat this, what’s the best layout for ZFS these days?
This comes down to personal opinion, but the answer for me is: for 6 or fewer disks, mirrors are best. Beyond that, use 6-wide RAIDZ2 vdevs. I don’t like RAIDZ1 (or RAID5).
A slightly more complicated answer is that RAIDZ trades performance for space efficiency, and that space efficiency can be considerably worse than you might think. Because of how RAIDZ and blocks work, small blocks (4K and 8K, and to a lesser extent 16K) can actually take up 2x their size on RAIDZ1, the same as a mirror. Likewise, RAIDZ2 amplifies the space occupied by 3x, like a triple mirror. And for all of these, performance is going to be terrible compared to mirrors. Things are generally fine past 32K though, and honestly most of your storage is going to generate much larger blocks. Unless you are using a ZVOL on RAIDZ, in which case you’ve made a terrible mistake.
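For concreteness, the two layouts I’m describing look roughly like this at creation time (pool name and device paths are placeholders; on a real system use the /dev/disk/by-id names of your actual drives):
# Striped mirrors, for 6 or fewer disks
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB \
    mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD
# A 6-wide RAIDZ2 vdev, once you're past that
zpool create -o ashift=12 tank \
    raidz2 /dev/disk/by-id/diskA /dev/disk/by-id/diskB /dev/disk/by-id/diskC \
           /dev/disk/by-id/diskD /dev/disk/by-id/diskE /dev/disk/by-id/diskF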
The long complicated answer is here:
Record sizes?
For general NAS storage, set this to a 1M or 4M recordsize. You are most likely writing once and accessing it rarely. Going higher generally doesn’t give much benefit unless you hoard video like me.
If you have an active dataset, like for torrents, logs, or databases, then consider leaving it at the default 128K, though 64K should be fine too. It has traditionally been suggested to match the recordsize to the write size (like the cluster_size of a database). However, because of how ZFS works, a slightly larger recordsize trades a little performance loss now for greatly reduced free-space fragmentation (and thus saved performance) later. The performance degradation from free-space fragmentation isn’t something that shows up in traditional quick and stupid benchmarks, and it doesn’t really become pathological until the pool gets close to full, so it’s more of an enterprise concern. Also note that some free-space fragmentation is not only expected but a good thing; it’s only a problem when it gets high and consists mostly of small blocks.
Again, double check these commands:
# Immediately changes max recordsize to 16M until reboot.
echo "16777216" > /sys/module/zfs/parameters/zfs_max_recordsize
# Permanently changes max recordsize to 16M, taking effect on the next reboot
echo "options zfs zfs_max_recordsize=16777216" >> /etc/modprobe.d/zfs.conf
# Actually sets the recordsize on your pool root or datasets
zfs set recordsize=4M your_pool/your_dataset
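As a rough sketch of how that maps onto the advice above (dataset names are placeholders):
# Write-once bulk storage like media
zfs set recordsize=1M your_pool/media
# Actively rewritten stuff like torrents or databases
zfs set recordsize=128K your_pool/torrents
# Check what a dataset is currently using
zfs get recordsize your_pool/your_dataset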
To change the block sizes of data already on a pool, you’ll have to copy things to a new dataset with rsync or something (and verify everything made it over). Doing a send/recv doesn’t change block sizes.
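A minimal sketch of that copy, assuming old and new datasets mounted under the pool (paths are placeholders; -H/-A/-X preserve hard links, ACLs, and xattrs):
# Copy into a dataset that already has the new recordsize set
rsync -aHAX --info=progress2 /your_pool/old_dataset/ /your_pool/new_dataset/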
Compression?
Leave it at LZ4, which is the default. The most space-consuming items on your NAS are going to be pictures or video, which are already compressed.
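Worth a quick check that compression is actually on, since the default has changed over the years and an older pool may have it off (pool name is a placeholder):
# Show the algorithm in use and how much it's actually saving
zfs get compression,compressratio your_pool
# Turn it on if it isn't already
zfs set compression=lz4 your_pool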
Apparently I created this pool back on January 9, 2016; I don’t think I’ve done much maintenance since then.
I generally set the following on my root pool so every new dataset inherits them (example after these lists).
- atime=off
- relatime=off
- recordsize=4M
The following may have compatibility issues with non-Linux ZFS implementations. I’m not sure what the state of things is, but if you’re on Linux you generally want them:
- xattr=sa
- dnodesize=auto
- acltype=posixacl
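A sketch of setting those on the pool root so children inherit them (your_pool is a placeholder; recordsize=4M also needs the zfs_max_recordsize bump from earlier):
# Set once on the pool root; new datasets inherit these
zfs set atime=off your_pool
zfs set relatime=off your_pool
zfs set recordsize=4M your_pool
zfs set xattr=sa your_pool
zfs set dnodesize=auto your_pool
zfs set acltype=posixacl your_pool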
Also, you should ALWAYS manually create a new pool with ashift=12 unless you have proof otherwise. Some drives lie, and ZFS will default to the reported physical sector size, even if it’s fucking stupid, like ashift=9 (512-byte sectors).
So use:
zpool create -o ashift=12 ...
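If you want to see what the drives claim and what a pool actually ended up with (pool name is a placeholder; zdb reads the cached pool config):
# What the drives report as physical/logical sector sizes
lsblk -o NAME,PHY-SEC,LOG-SEC
# What ashift the pool's vdevs were actually created with
zdb -C your_pool | grep ashift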
You can also set the following to make it automatic in case you forget.
# Immediately sets the minimum automatic ashift to 12 (4096-byte sectors); lasts until reboot
echo "12" > /sys/module/zfs/parameters/zfs_vdev_min_auto_ashift
# Permanently sets the minimum automatic ashift to 12 (4096-byte sectors)
echo "options zfs zfs_vdev_min_auto_ashift=12" >> /etc/modprobe.d/zfs.conf
Let me know if there are any errors.