ZFS Module Parameters

Hi,

I’m working on creating a set of ZFS tunables optimized for NVMe-based pools, including both NVMe-only and mixed configurations (NVMe + SSDs/HDDs).

These tunables will be part of Step 25: Tune ZFS Module Parameters (optional) in my “Setting Up TrueNAS Before Using Docker Stacks” guide:

Are there any ZFS experts here who could help develop these tunables? Your input would be greatly appreciated! :wink:

EDIT: As I said in this other thread: “My goal here is to create a way that allows “everybody” to get the best performance out of ZFS… even without having technical knowledge (but learning along the way)” :roll_eyes:

Best regards,
PapaGigas

1 Like

This changes quite a bit between versions. I had a few tunables I needed to set in 2.2, but now on 2.3 the only thing I’ve found that needs to be tweaked in the module is:

options zfs metaslab_lba_weighting_enabled=0
options zfs zfs_arc_max=17179869184
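
For reference, you can check the live values of those parameters through sysfs once the zfs module is loaded (standard paths on any Linux running OpenZFS, including TrueNAS SCALE):

     # Current ARC size cap in bytes (0 means the built-in default is in effect)
     cat /sys/module/zfs/parameters/zfs_arc_max

     # LBA weighting: 1 = enabled (the default), 0 = disabled
     cat /sys/module/zfs/parameters/metaslab_lba_weighting_enabled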
1 Like

You have an NVMe-only or SSD-only pool? :roll_eyes:

I want to create tunables for different configurations, like I put in my guide:

Step 25: Tune ZFS Module Parameters (optional) - Ensure that ZFS is optimized for your high-performance storage (e.g.: Optane, NVMe): # Maximizes IOPS and reduces I/O bottlenecks

  1. Navigate to "System > Shell" in the TrueNAS interface.

  2. Copy and paste the following command into the TrueNAS SHELL: # Choose only the command that matches your pool configuration

    * NVMe-only pools

          midclt call system.advanced.update '{"kernel_extra_options": "zfs_vdev_def_queue_depth=128 zfs_dmu_offset_next_sync=0 zfs_vdev_async_read_max_active=12 zfs_vdev_max_active=4096"}'

    * NVMe pools + SSD pools

          midclt call system.advanced.update '{"kernel_extra_options": ""}'

    * NVMe pools + SSD pools + HDD pools

          midclt call system.advanced.update '{"kernel_extra_options": ""}'

    * NVMe pools + HDD pools

          midclt call system.advanced.update '{"kernel_extra_options": ""}'

    WARNING: If you previously configured Step 24, combine those options with these in a single command to avoid overwriting settings (see the example after this step)!

  3. If needed, refer to the official OpenZFS documentation for detailed guidance on module parameters. # https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html
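
To avoid the overwrite problem from the warning above, one approach is to read back whatever is currently configured and resubmit everything in a single call. A rough sketch (jq is assumed to be available, and '<existing options>' is a placeholder for whatever the first command returns):

     # Show the extra kernel options that are currently configured
     midclt call system.advanced.config | jq -r '.kernel_extra_options'

     # Resubmit the existing options plus the new ones in one combined update
     midclt call system.advanced.update '{"kernel_extra_options": "<existing options> zfs_vdev_def_queue_depth=128 zfs_vdev_max_active=4096"}'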

Best regards,
PapaGigas

NVMe-only, but that metaslab parameter should be the same on NVMe and SSD. It’s only useful to have enabled on spinning disks, AFAIK.

I should also mention that the reason I set arc max is that I have 768 GB of RAM in this machine, and by default ARC will try to grab hundreds of GB. That leads to a bunch of problems, so limiting it to under 20 GB is a good performance/usage balance for my setup.
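
In case it isn’t obvious where 17179869184 comes from: zfs_arc_max is specified in bytes, so that value is just 16 GiB, and the same arithmetic works for whatever cap you pick:

     # 16 GiB expressed in bytes for zfs_arc_max
     echo $(( 16 * 1024 * 1024 * 1024 ))    # prints 17179869184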

1 Like

I am reminded that I do set some options for the nvme driver if you are interested.

options nvme poll_queues=8
options nvme write_queues=8

1 Like

Correct, I’ve added it to the “NVMe-only pools” and “NVMe pools + SSD pools” commands, thanks! :wink:

Best regards,
PapaGigas

1 Like

I’ve noticed that @wendell set nvme.poll_queues to 64 in this thread:

Best regards,
PapaGigas

1 Like

I’ve added one more step to my guide! :wink:

Step 25: Tune NVMe Driver (optional) - Ensure that the NVMe driver is optimized for your high-performance storage (e.g.: Optane, NVMe): # Improves performance by increasing poll and write queues

  1. Navigate to "System > Shell" in the TrueNAS interface.

  2. Copy and paste the following command into the TrueNAS SHELL:

     midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=8 nvme.write_queues=8"}'

     WARNING: If you previously configured Step 24, combine those options with these in a single command to avoid overwriting settings!

Best regards,
PapaGigas

2 Likes

I would defer to Wendell’s expertise, but I suspect that number is probably influenced by the CPU core topology.

1 Like

Based on that, and @wendell’s command, I’ve changed the instructions to this:

Step 25: Offload RCU Callbacks (optional) - Offload RCU (Read-Copy-Update) callbacks from CPU cores to kernel threads: # Reduces latency and improves performance, especially on high-core-count systems

  1. Navigate to "System > Shell" in the TrueNAS interface.

  2. Copy and paste the following command into the TrueNAS SHELL: # Replace '63' with the number of logical CPU cores in your system minus 1 (e.g.: 64 threads - 1 = 63)

     midclt call system.advanced.update '{"kernel_extra_options": "rcu_nocbs=0-63"}'

     WARNING: If you previously configured Step 24, combine those options with these in a single command to avoid overwriting settings!

  3. If needed, copy and paste the following command into the TrueNAS SHELL to find the number of logical CPU cores in your system (or use the sketch after this step to build the value automatically):

     lscpu | grep '^CPU(s):'
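
If you would rather not count threads by hand, a small sketch like this builds the value automatically (nproc is part of coreutils, so it should be present on TrueNAS SCALE; the same caveat about combining options from earlier steps applies):

     # rcu_nocbs takes a CPU range from 0 to (logical CPUs - 1)
     RANGE="0-$(( $(nproc --all) - 1 ))"
     midclt call system.advanced.update "{\"kernel_extra_options\": \"rcu_nocbs=${RANGE}\"}"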

Step 26: Tune NVMe Driver (optional) - Ensure that the NVMe driver is optimized for your high-performance storage (e.g.: Optane, NVMe): # Improves NVMe performance by enabling polling and optimizing queue usage

  1. Navigate to "System > Shell" in the TrueNAS interface.

  2. Copy and paste the following command into the TrueNAS SHELL: # Replace '32' with the number of physical CPU cores in your system

     midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=32 nvme.write_queues=8 nvme.io_poll=1 nvme.io_poll_delay=0 nvme_core.io_timeout=2 max_host_mem_size_mb=512"}'

     WARNING: If you previously configured Step 24 and/or Step 25, combine those options with these in a single command to avoid overwriting settings!

  3. If needed, copy and paste the following command into the TrueNAS SHELL to find the number of physical CPU cores in your system (on multi-socket systems, see the note after this step):

     lscpu | grep 'Core(s) per socket'
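
One caveat on that last command: “Core(s) per socket” is exactly that, per socket, so on a multi-socket board you would multiply by the socket count. A sketch that counts physical cores across the whole system:

     # Unique (core, socket) pairs = physical cores across all sockets
     lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l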

Best regards,
PapaGigas

Old kernels were kinda dumb about poll queues. I think now you get one per CPU core, which is fine until you have gobs of CPUs; then it seems overkill unless you also have gobs of NVMe.

1 Like

But are these instructions still valid? Do you have a better suggestion? :roll_eyes:

And can you help with the ZFS Module Parameters for different scenarios? :roll_eyes:

EDIT: Btw, this is my storage layout:

[ Boot ]

Name: boot-pool
Disks: 2 SSD
Layout: 1 x Mirror

[ System / Apps ]

Name: tank
Disks: 7 Optane
Layout: 3 x Mirror + Spare

[ Media / Downloads ]

Name: morpheus
Disks: 2 HDD + 2 NVMe
Layout: 1 x Mirror + Metadata/Small Blocks (NVMe Mirror)

[ Data / Games ]

Name: trinity
Disks: 4 SSD + 2 NVMe
Layout: 1 x RAIDZ1 + Metadata/Small Blocks (NVMe Mirror)

[ Backups ]

Name: neo
Disks: 8 SSD
Layout: 1 x RAIDZ2

Best regards,
PapaGigas

I’ve modified this to address all three scenarios, but I’m still unsure how to handle Optane + NAND:

Step 26: Tune NVMe Driver (optional) - Ensure that the NVMe driver is optimized for your high-performance storage (e.g.: Optane, NVMe): # Improves NVMe performance by optimizing queue usage

  1. Navigate to "System > Shell" in the TrueNAS interface.

  2. Copy and paste the following command into the TrueNAS SHELL: # Choose only the configuration that matches your NVMe hardware layout (see the note after this step if you are unsure which you have)

    * Optane-only

          midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=4 nvme.write_queues=4 max_host_mem_size_mb=512"}'

    * Optane + NAND # Replace '16' with the number of physical CPU cores in your system divided by 2 (e.g.: 32 cores / 2 = 16)

          midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=4 nvme.write_queues=16 max_host_mem_size_mb=512"}'

    * NAND-only # Replace '16' with the number of physical CPU cores in your system divided by 2 (e.g.: 32 cores / 2 = 16)

          midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=16 nvme.write_queues=16 max_host_mem_size_mb=512"}'

     WARNING: If you previously configured Step 24 and/or Step 25, combine those options with these in a single command to avoid overwriting settings!

  3. If needed, copy and paste the following command into the TrueNAS SHELL to find the number of physical CPU cores in your system:

     lscpu | grep 'Core(s) per socket'
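
If you are unsure which category your drives fall into, the model strings usually make it obvious. Assuming nvme-cli is available (with lsblk as a fallback):

     # Model names per NVMe device (Optane drives typically show "Optane", "P4800X", "P5800X", etc.)
     nvme list

     # Alternative without nvme-cli
     lsblk -d -o NAME,MODEL,TRAN,SIZE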

EDIT: This is what I ended up with… hopefully I’m on the right track… :roll_eyes:

Best regards,
PapaGigas

Do you have a 16 core CPU? :roll_eyes:

Best regards,
PapaGigas

32/64 c/t

1 Like

I’ve changed the calculation:

* Optane-only

     midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=2 nvme.write_queues=2 nvme.io_queue_depth=16 nvme.use_threaded_interrupts=1 nvme.max_host_mem_size_mb=512"}'

* Optane + NAND

     midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=2 nvme.write_queues=2 nvme.io_queue_depth=64 nvme.use_threaded_interrupts=1 nvme.max_host_mem_size_mb=512"}'

* NAND-only # Replace '8' with the number of physical CPU cores in your system divided by 4 (e.g.: 32 cores / 4 = 8) in 'nvme.poll_queues' and 'nvme.write_queues' (or use the sketch after these commands)

     midclt call system.advanced.update '{"kernel_extra_options": "nvme.poll_queues=8 nvme.write_queues=8 nvme.io_queue_depth=256 nvme.use_threaded_interrupts=1 nvme.max_host_mem_size_mb=512"}'
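
Putting the core-count math and the update call together for the NAND-only case, roughly (same caveat as before about folding in any options from earlier steps):

     # Physical cores across all sockets, then divide by 4 for the queue counts
     CORES=$(lscpu -p=CORE,SOCKET | grep -v '^#' | sort -u | wc -l)
     QUEUES=$(( CORES / 4 ))

     midclt call system.advanced.update "{\"kernel_extra_options\": \"nvme.poll_queues=${QUEUES} nvme.write_queues=${QUEUES} nvme.io_queue_depth=256 nvme.use_threaded_interrupts=1 nvme.max_host_mem_size_mb=512\"}"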

PS - I’m still unsure how to handle Optane + NAND! :roll_eyes:

Best regards,
PapaGigas