I’m working on creating a set of ZFS tunables optimized for NVMe-based pools, including both NVMe-only and mixed configurations (NVMe + SSDs/HDDs).
These tunables will be part of Step 25: Tune ZFS Module Parameters (optional) in my “Setting Up TrueNAS Before Using Docker Stacks” guide:
Are there any ZFS experts here who could help develop these tunables? Your input would be greatly appreciated!
EDIT: As I said in this other thread: “My goal here is to create a way that allows “everybody” to get the best performance out of ZFS… even without having technical knowledge (but learning along the way)”
This changes quite a bit between versions. I had a few tunables I needed to set in 2.2, but now on 2.3 the only thing I've found that needed to be tweaked in the module is
I want to create tunables for different configurations, like I put in my guide:
Step 25: Tune ZFS Module Parameters (optional) - Ensure that ZFS is optimized for your high-performance storage (e.g.: Optane, NVMe): # Maximizes IOPS and reduces I/O bottlenecks
1. Navigate to "System > Shell" in the TrueNAS interface.
2. Copy and paste the following command into the TrueNAS SHELL: # Choose only the corresponding layout of your pools configuration
* NVMe-only pools
midclt call system.advanced.update '{"kernel_extra_options": "zfs_vdev_def_queue_depth=128 zfs_dmu_offset_next_sync=0 zfs_vdev_async_read_max_active=12 zfs_vdev_max_active=4096"}'
* NVMe pools + SSD pools
midclt call system.advanced.update '{"kernel_extra_options": ""}'
* NVMe pools + SSD pools + HDD pools
midclt call system.advanced.update '{"kernel_extra_options": ""}'
* NVMe pools + HDD pools
midclt call system.advanced.update '{"kernel_extra_options": ""}'
WARNING: If you previously configured Step 24, combine those options with these in a single command to avoid overwriting settings!
3. If needed, refer to the official OpenZFS documentation for detailed guidance on module parameters. # https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html
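The NVMe-only command above can also be built up one tunable at a time, which makes it easier to extend or combine with options from other steps later. This is just a sketch using the exact values from the NVMe-only example; the printed line is the same `midclt` command as above.

```shell
# Build the kernel_extra_options string from individual ZFS tunables
# (values taken from the NVMe-only example above).
opts="zfs_vdev_def_queue_depth=128"
opts="$opts zfs_dmu_offset_next_sync=0"
opts="$opts zfs_vdev_async_read_max_active=12"
opts="$opts zfs_vdev_max_active=4096"

# Print the resulting command instead of running it, so it can be reviewed first.
echo "midclt call system.advanced.update '{\"kernel_extra_options\": \"$opts\"}'"
```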
NVMe only, but that metaslab parameter should be the same on NVMe and SSD. It's only useful to have enabled on spinning disks, AFAIK.
I should also mention the reason I set arc max: I have 768 GB of RAM in the machine, and by default ARC will try to grab hundreds of GB. That leads to a bunch of problems, so limiting it to under 20 GB is a good performance/usage balance for my setup.
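For anyone wanting to reproduce that cap: `zfs_arc_max` takes a value in bytes, not GB, so the 20 GB limit mentioned above has to be converted. A quick sketch (the 20 GiB figure is just this poster's example, not a recommendation for every system):

```shell
# Convert a human-friendly ARC cap (20 GiB here) into the byte value
# that the zfs_arc_max module parameter expects.
arc_gib=20
arc_bytes=$((arc_gib * 1024 * 1024 * 1024))
echo "zfs_arc_max=$arc_bytes"
```

The printed value (`zfs_arc_max=21474836480`) is what would go into `kernel_extra_options` alongside the other tunables.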
Step 25: Tune NVMe Driver (optional) - Ensure that the NVMe driver is optimized for your high-performance storage (e.g.: Optane, NVMe): # Improves performance by increasing poll and write queues
1. Navigate to "System > Shell" in the TrueNAS interface.
2. Copy and paste the following command into the TrueNAS SHELL:
midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=8 nvme.write_queues=8"}'
WARNING: If you previously configured Step 24, combine those options with these in a single command to avoid overwriting settings!
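To make the "combine, don't overwrite" warning concrete: `kernel_extra_options` is a single string, so a later `midclt` call replaces whatever an earlier step set. A sketch of appending this step's options to an existing value (the `existing` string here is a made-up example; in practice you would copy it from your earlier step's command):

```shell
# Example only: pretend an earlier step already set this option.
existing="rcu_nocbs=0-63"

# This step's additions.
new="nvme.poll_queues=8 nvme.write_queues=8"

# Combine both into one space-separated string and print the single
# midclt command that applies everything at once.
combined="$existing $new"
echo "midclt call system.advanced.update '{\"kernel_extra_options\": \"$combined\"}'"
```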
Based on that, and @wendell's command, I've changed the instructions to this:
Step 25: Offload RCU Callbacks (optional) - Offload RCU (Read-Copy-Update) callbacks from CPU cores to kernel threads: # Reduces latency and improves performance, especially on high-core-count systems
1. Navigate to "System > Shell" in the TrueNAS interface.
2. Copy and paste the following command into the TrueNAS SHELL: # Replace '63' with the number of logical CPU cores in your system minus 1 (e.g.: 64 threads - 1 = 63)
midclt call system.advanced.update '{"kernel_extra_options": "rcu_nocbs=0-63"}'
WARNING: If you previously configured Step 24, combine those options with these in a single command to avoid overwriting settings!
3. If needed, copy and paste the following command into the TrueNAS SHELL to find the number of logical CPU cores in your system:
lscpu | grep '^CPU(s):'
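The "logical cores minus 1" arithmetic from step 2 can be done in the shell instead of by hand. A sketch with a hard-coded example value (on a real system you would replace `64` with the `CPU(s)` count from the `lscpu` command above, or with the output of `nproc`):

```shell
# Example: a system with 64 logical CPUs. Replace with your own count.
cpus=64

# rcu_nocbs takes a zero-based CPU range, hence the minus 1.
range="rcu_nocbs=0-$((cpus - 1))"
echo "$range"
```

This prints `rcu_nocbs=0-63`, matching the example in the command above.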
Step 26: Tune NVMe Driver (optional) - Ensure that the NVMe driver is optimized for your high-performance storage (e.g.: Optane, NVMe): # Improves NVMe performance by enabling polling and optimizing queue usage
1. Navigate to "System > Shell" in the TrueNAS interface.
2. Copy and paste the following command into the TrueNAS SHELL: # Replace '32' with the number of physical CPU cores in your system
midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=32 nvme.write_queues=8 nvme.io_poll=1 nvme.io_poll_delay=0 nvme_core.io_timeout=2 max_host_mem_size_mb=512"}'
WARNING: If you previously configured Step 24 and/or Step 25, combine those options with these in a single command to avoid overwriting settings!
3. If needed, copy and paste the following command into the TrueNAS SHELL to find the number of physical CPU cores in your system:
lscpu | grep 'Core(s) per socket'
Old kernels were kinda dumb about poll queues. I think now you get one per CPU core, which is fine until you have gobs of CPUs; then it seems overkill unless you also have gobs of NVMe.
I’ve modified this to address all three scenarios, but I’m still unsure how to handle Optane + NAND:
Step 26: Tune NVMe Driver (optional) - Ensure that the NVMe driver is optimized for your high-performance storage (e.g.: Optane, NVMe): # Improves NVMe performance by optimizing queue usage
1. Navigate to "System > Shell" in the TrueNAS interface.
2. Copy and paste the following command into the TrueNAS SHELL: # Choose only the configuration that matches your NVMe hardware layout
* Optane-only
midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=4 nvme.write_queues=4 max_host_mem_size_mb=512"}'
* Optane + NAND # Replace '16' with the number of physical CPU cores in your system divided by 2 (e.g.: 32 cores / 2 = 16)
midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=4 nvme.write_queues=16 max_host_mem_size_mb=512"}'
* NAND-only # Replace '16' with the number of physical CPU cores in your system divided by 2 (e.g.: 32 cores / 2 = 16)
midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=16 nvme.write_queues=16 max_host_mem_size_mb=512"}'
WARNING: If you previously configured Step 24 and/or Step 25, combine those options with these in a single command to avoid overwriting settings!
3. If needed, copy and paste the following command into the TrueNAS SHELL to find the number of physical CPU cores in your system:
lscpu | grep 'Core(s) per socket'
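The "physical cores divided by 2" rule from the NAND-only case can likewise be computed in the shell. A sketch with a hard-coded example (replace `32` with the `Core(s) per socket` value from `lscpu`, multiplied by your socket count):

```shell
# Example: 32 physical cores. Replace with your own count from lscpu.
cores=32

# Per the note above, the NAND-only case uses cores / 2 for both queues.
queues=$((cores / 2))
line="nvme.poll_queues=$queues nvme.write_queues=$queues"
echo "$line"
```

This prints `nvme.poll_queues=16 nvme.write_queues=16`, matching the NAND-only command above.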
EDIT: This is what I ended up with… hopefully I'm on the right track…