Looking for partner in crime for testing NVMe + ZFS idea

I have not found an NVMe server/JBOF that I both like AND can afford, so I am curious whether anyone has one floating around that I could use, or whether someone would be willing to run a test for me.

I am still digging into everything, but it looks like one of the more fun, low-effort wins is to leverage NVMe namespaces.

IIRC, people and vendors have had good results (and customer satisfaction) with 5-disk raidz1 vdevs in the field, so I will go with that assumption for now.

Five NVMe disks, each carved into 5 equal-size namespaces, set up as 5 raidz1 vdevs. Use "none" as the NVMe I/O scheduler.

I would love to see an A/B test of a single namespace per disk vs. the layout below.

There are also Zoned Namespaces (ZNS), but I have not looked deeply into those yet, or into whether ZFS has a facility for them.
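As a quick check for ZNS: Linux exposes whether a block device is zoned via sysfs. A conventional (non-ZNS) namespace reports "none". The device name here is a placeholder:

```shell
# "none" = conventional namespace; "host-managed" = ZNS zoned device.
# Replace nvme0n1 with the namespace you want to inspect.
cat /sys/block/nvme0n1/queue/zoned
```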

/dev/nvme0n1 > raidz1-0
/dev/nvme0n2 > raidz1-1
/dev/nvme0n3 > raidz1-2
/dev/nvme0n4 > raidz1-3
/dev/nvme0n5 > raidz1-4

/dev/nvme1n1 > raidz1-0
/dev/nvme1n2 > raidz1-1
/dev/nvme1n3 > raidz1-2
/dev/nvme1n4 > raidz1-3
/dev/nvme1n5 > raidz1-4

/dev/nvme2n1 > raidz1-0
/dev/nvme2n2 > raidz1-1
/dev/nvme2n3 > raidz1-2
/dev/nvme2n4 > raidz1-3
/dev/nvme2n5 > raidz1-4

/dev/nvme3n1 > raidz1-0
/dev/nvme3n2 > raidz1-1
/dev/nvme3n3 > raidz1-2
/dev/nvme3n4 > raidz1-3
/dev/nvme3n5 > raidz1-4

/dev/nvme4n1 > raidz1-0
/dev/nvme4n2 > raidz1-1
/dev/nvme4n3 > raidz1-2
/dev/nvme4n4 > raidz1-3
/dev/nvme4n5 > raidz1-4
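A sketch of how the layout above could be provisioned with nvme-cli and ZFS. The device names, block count, controller ID, and pool name are all placeholders; pull real capacities from nvme id-ctrl, and note that many controllers need a reset (or reboot) after namespace changes before the new block devices appear:

```shell
# Carve one controller into 5 equal namespaces (repeat for each drive).
# TOTAL_BLOCKS is a placeholder for the drive's usable 512 B block count.
TOTAL_BLOCKS=7501476528
NS_BLOCKS=$(( TOTAL_BLOCKS / 5 ))

for nsid in 1 2 3 4 5; do
  nvme create-ns /dev/nvme0 --nsze="$NS_BLOCKS" --ncap="$NS_BLOCKS" --flbas=0
  nvme attach-ns /dev/nvme0 --namespace-id="$nsid" --controllers=0
done

# Use the "none" scheduler on every namespace block device.
for q in /sys/block/nvme*n*/queue/scheduler; do
  echo none > "$q"
done

# Namespace k of every drive goes into raidz1 vdev k-1, matching the layout.
zpool create tank \
  raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 \
  raidz1 /dev/nvme0n2 /dev/nvme1n2 /dev/nvme2n2 /dev/nvme3n2 /dev/nvme4n2 \
  raidz1 /dev/nvme0n3 /dev/nvme1n3 /dev/nvme2n3 /dev/nvme3n3 /dev/nvme4n3 \
  raidz1 /dev/nvme0n4 /dev/nvme1n4 /dev/nvme2n4 /dev/nvme3n4 /dev/nvme4n4 \
  raidz1 /dev/nvme0n5 /dev/nvme1n5 /dev/nvme2n5 /dev/nvme3n5 /dev/nvme4n5
```

In a real run you would use /dev/disk/by-id paths rather than the raw nvmeXnY names so the pool survives device renumbering.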

What are you trying to “win” by using namespaces? As far as I can see, all you “win” is a lot of extra work (setting up the namespaces).

https://www.kernel.org/doc/html/latest/filesystems/zonefs.html

You didn’t mention which NVMe drives you have or plan to use, but it’s likely that they don’t support ZNS.

The OS assigns 2 threads per disk. In NVMe land, each namespace is treated as an independent disk, so to take advantage of the large amount of parallelism NVMe offers versus what a normal kernel/filesystem will do with a single device, we can present each physical disk/controller as several namespaces. Think of it like having 5 SSDs behind a RAID controller/HBA.
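One way to see the parallelism the kernel actually sets up is to count the blk-mq hardware queues per namespace block device; each namespace gets its own set. The device glob is a placeholder:

```shell
# Each entry under mq/ is one hardware dispatch queue the kernel
# allocated for that block device. More namespaces => more queue sets.
for d in /sys/block/nvme0n*; do
  printf '%s: %s hardware queues\n' "$d" "$(ls "$d"/mq | wc -l)"
done
```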

Here is a decent overview of how this works and a bit of what I am thinking about: https://www.youtube.com/watch?v=7MYw-0qfpH8


Page 10 of https://www.snia.org/sites/default/files/SDCEMEA/2020/3%20-%20Javier%20Gonzalez%20Zoned%20namespacese.PDF

If I had the NVMe disks and a server I could use, I would not be looking for someone to partner with. Preferably enterprise NVMe, by the way.

I might be able to assist in a week or two (note that I’m an absolute beginner regarding ZFS, so I would need idiot-proof guidance):

  • I am in the process of evaluating AMD “chipset” software RAID on an AM5 motherboard with 6 Samsung 990 PRO SSDs that support NVMe 2.0 (for whatever that’s worth; 4 × 2 TB and 2 × 4 TB models). Each SSD gets 4 PCIe Gen4 lanes directly from the CPU.

  • When that’s done I can play with these SSDs for a while, but only in AM4 motherboards (likely an ASUS Pro WS X570-ACE with a 5950X), where, due to the AM4 platform, only 5 NVMe SSDs can be operated at full speed on native CPU PCIe Gen4 x4 interfaces.

  • I don’t have any PCIe Gen5 NVMe SSDs yet, since I currently consider them useless: their disadvantages still outweigh their advantages. One of my maxims is to always increase the number of physical drives for redundancy rather than buy faster individual drives.

This makes a lot of sense to me. It seems like a lot of software that interfaces with storage needs to be reconsidered, because it probably makes assumptions rooted in the HDD era; for example, y-cruncher recently added tuning values with different defaults for HDD vs. SSD.

The quote below, from the y-cruncher “User Guides - Swap Mode” page, suggests it implements a very similar idea:

When the lane multiplier is larger than 1, the framework will treat each path as if they were multiple independent drives and will stripe them accordingly. Accesses will then be parallelized across the lanes resulting in parallel access to the same path. … The motivation here is for drives that require I/O parallelism and high queue depth to achieve maximum bandwidth. (namely SSDs)
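The striping described in the quote can be sketched with a bit of arithmetic: with L lanes and a stripe of S bytes, byte offset O lands on lane (O / S) mod L. This is my own illustrative sketch of the idea, not y-cruncher's actual code, and the 1 MiB stripe size is an assumption:

```shell
# Hypothetical lane-mapping arithmetic for a "lane multiplier" of 4
# with a 1 MiB stripe; mirrors the idea in the quote.
lane_for_offset() {
  local off=$1 stripe=$2 lanes=$3
  echo $(( (off / stripe) % lanes ))
}

lane_for_offset 0       1048576 4   # lane 0
lane_for_offset 1048576 1048576 4   # lane 1
lane_for_offset 4194304 1048576 4   # wraps back to lane 0
```

Because consecutive stripes rotate across lanes, a sequential access stream keeps all four lanes (and hence four queues) busy at once.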

Below are some other potentially interesting quotes about SSD optimizations y-cruncher employs:

Workers/lane is 2 for SSDs because SSDs require I/O parallelism (and a queue depth) to achieve the high bandwidth. … Each worker consists of an I/O buffer and a thread. … Increasing the # of workers increases the amount of I/O parallelism at the cost of higher CPU overhead and less sequential access. Some SSD-based arrays will find that 4 workers/lane to be better than the default of 2.

For SSDs, it is quite easy to saturate memory bandwidth from just I/O. … If this is the case, the buffer size should be small enough to fit comfortably in the CPU cache … Otherwise, a larger buffer size (> 8 MB) may be beneficial to reduce OS API overhead.

Thanks for the link. I was looking for “official” documentation in this regard.

In the presentation, namespaces were presented as different from partitions, but in my experience partitions give the same performance lift as namespaces.

I think the main benefit of namespaces over partitions is that they appear as separate devices, which is important for applications that only accept devices (not partitions) as input, such as VMware.

I have been using NVMe partitions in ZFS vdevs for a couple of years. However, I use NVMe devices only as accelerators for HDD-based pools.
In my tests, configuring 3-4 partitions as special or l2arc vdevs yields optimal performance.
To increase redundancy, your proposed scheme works well.

So, for my use cases I cannot see a performance difference between regular partitions and namespaces. However, partitions are more flexible compared to namespaces. Tools such as gparted can dynamically resize partitions without data loss. I am not aware of tools with similar functionality for namespaces. In my home lab I find that change is the only constant. That is typically not the case in enterprise use cases.
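On the flexibility point: a GPT partition on a namespace can be grown in place, whereas resizing a namespace generally means detaching and recreating it. A hypothetical example (device, partition number, and pool name are placeholders):

```shell
# Grow partition 1 to fill the device, then let ZFS expand onto the
# larger partition (autoexpand off, so use explicit online -e).
parted /dev/nvme0n1 resizepart 1 100%
zpool online -e tank nvme0n1p1
```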

You’ll find that consumer NVMe drives don’t expose multiple namespaces (I am not aware of a single one that does). Intel Optane devices also don’t support multiple namespaces.
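You can check what a given controller supports with nvme-cli; the "nn" field in the id-ctrl output is the maximum number of namespaces, and consumer drives typically report 1. The device name is a placeholder:

```shell
# nn = max number of namespaces the controller supports
# (1 on most consumer drives; enterprise parts report more).
nvme id-ctrl /dev/nvme0 | grep -w nn
```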

There are good discussions around namespaces on this forum. I encourage you to search.

Yes, I have been wondering why y-cruncher configures 4 threads by default; the Micron presentation seems to explain this.
In my tests, different configurations did not show improved performance.

OK, sorry peeps, things got busy and my test farm was barren. Here is the test data I got out of a box, though note it’s from a VM on an ESXi host with the NVMe disks passed through.

# Kioxia CM6, RAID-10, 4 namespaces, sync=standard

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n2     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n3     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n4     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n2     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n3     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n4     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n2     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n3     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n4     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n2     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n3     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n4     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0

root@quantastor:/# zpool status -L
  pool: qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
 state: ONLINE
config:

        NAME                                     STATE     READ WRITE CKSUM
        qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def  ONLINE       0     0     0
          mirror-0                               ONLINE       0     0     0
            nvme0n1                              ONLINE       0     0     0
            nvme0n2                              ONLINE       0     0     0
          mirror-1                               ONLINE       0     0     0
            nvme0n3                              ONLINE       0     0     0
            nvme0n4                              ONLINE       0     0     0
          mirror-2                               ONLINE       0     0     0
            nvme1n1                              ONLINE       0     0     0
            nvme1n2                              ONLINE       0     0     0
          mirror-3                               ONLINE       0     0     0
            nvme1n3                              ONLINE       0     0     0
            nvme1n4                              ONLINE       0     0     0
          mirror-4                               ONLINE       0     0     0
            nvme2n1                              ONLINE       0     0     0
            nvme2n2                              ONLINE       0     0     0
          mirror-5                               ONLINE       0     0     0
            nvme2n3                              ONLINE       0     0     0
            nvme2n4                              ONLINE       0     0     0
          mirror-6                               ONLINE       0     0     0
            nvme3n1                              ONLINE       0     0     0
            nvme3n2                              ONLINE       0     0     0
          mirror-7                               ONLINE       0     0     0
            nvme3n3                              ONLINE       0     0     0
            nvme3n4                              ONLINE       0     0     0
			
root@quantastor:/# zpool status -v
  pool: qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def        ONLINE       0     0     0
          mirror-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d202  ONLINE       0     0     0
          mirror-1                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d203  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d204  ONLINE       0     0     0
          mirror-2                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6802  ONLINE       0     0     0
          mirror-3                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6804  ONLINE       0     0     0
          mirror-4                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b802  ONLINE       0     0     0
          mirror-5                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b804  ONLINE       0     0     0
          mirror-6                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a202  ONLINE       0     0     0
          mirror-7                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a203  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a204  ONLINE       0     0     0

errors: No known data errors

root@quantastor:/# zpool iostat -v
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def        5.36M  6.94T      0     13  18.8K   220K
  mirror-0                                      540K   888G      0      1  2.35K  33.7K
    nvme-eui.00000000000000008ce38ee20af6d201      -      -      0      0  1.17K  16.8K
    nvme-eui.00000000000000008ce38ee20af6d202      -      -      0      0  1.17K  16.8K
  mirror-1                                      884K   888G      0      2  2.35K  32.2K
    nvme-eui.00000000000000008ce38ee20af6d203      -      -      0      1  1.17K  16.1K
    nvme-eui.00000000000000008ce38ee20af6d204      -      -      0      1  1.17K  16.1K
  mirror-2                                     1.23M   888G      0      3  2.35K  35.3K
    nvme-eui.00000000000000008ce38ee20d6b6801      -      -      0      1  1.17K  17.7K
    nvme-eui.00000000000000008ce38ee20d6b6802      -      -      0      1  1.17K  17.7K
  mirror-3                                     1.16M   888G      0      2  2.35K  31.9K
    nvme-eui.00000000000000008ce38ee20d6b6803      -      -      0      1  1.17K  15.9K
    nvme-eui.00000000000000008ce38ee20d6b6804      -      -      0      1  1.17K  15.9K
  mirror-4                                     1.11M   888G      0      2  2.35K  30.7K
    nvme-eui.00000000000000008ce38ee20af7b801      -      -      0      1  1.17K  15.4K
    nvme-eui.00000000000000008ce38ee20af7b802      -      -      0      1  1.17K  15.4K
  mirror-5                                      404K   888G      0      0  2.35K  18.7K
    nvme-eui.00000000000000008ce38ee20af7b803      -      -      0      0  1.17K  9.37K
    nvme-eui.00000000000000008ce38ee20af7b804      -      -      0      0  1.17K  9.37K
  mirror-6                                        8K   888G      0      0  2.35K  14.8K
    nvme-eui.00000000000000008ce38ee20af7a201      -      -      0      0  1.17K  7.39K
    nvme-eui.00000000000000008ce38ee20af7a202      -      -      0      0  1.17K  7.39K
  mirror-7                                       64K   888G      0      1  2.35K  23.1K
    nvme-eui.00000000000000008ce38ee20af7a203      -      -      0      0  1.17K  11.5K
    nvme-eui.00000000000000008ce38ee20af7a204      -      -      0      0  1.17K  11.5K
---------------------------------------------  -----  -----  -----  -----  -----  -----

root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# zpool iostat -v
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def        97.7G  6.84T  8.25K  13.1K  1.03G  1.58G
  mirror-0                                     12.2G   876G  1.04K  1.63K   134M   201M
    nvme-eui.00000000000000008ce38ee20af6d201      -      -    537    835  67.2M   100M
    nvme-eui.00000000000000008ce38ee20af6d202      -      -    532    835  66.6M   100M
  mirror-1                                     12.3G   876G  1.01K  1.65K   130M   203M
    nvme-eui.00000000000000008ce38ee20af6d203      -      -    517    843  64.7M   101M
    nvme-eui.00000000000000008ce38ee20af6d204      -      -    519    843  65.0M   101M
  mirror-2                                     12.1G   876G  1.02K  1.64K   131M   201M
    nvme-eui.00000000000000008ce38ee20d6b6801      -      -    525    839  65.7M   101M
    nvme-eui.00000000000000008ce38ee20d6b6802      -      -    522    838  65.3M   101M
  mirror-3                                     12.3G   876G  1.04K  1.65K   133M   203M
    nvme-eui.00000000000000008ce38ee20d6b6803      -      -    526    842  65.9M   101M
    nvme-eui.00000000000000008ce38ee20d6b6804      -      -    534    842  66.8M   101M
  mirror-4                                     12.2G   876G  1.03K  1.63K   132M   201M
    nvme-eui.00000000000000008ce38ee20af7b801      -      -    531    836  66.4M   100M
    nvme-eui.00000000000000008ce38ee20af7b802      -      -    522    836  65.3M   100M
  mirror-5                                     12.2G   876G  1.04K  1.64K   133M   203M
    nvme-eui.00000000000000008ce38ee20af7b803      -      -    533    842  66.7M   101M
    nvme-eui.00000000000000008ce38ee20af7b804      -      -    533    842  66.7M   101M
  mirror-6                                     12.2G   876G  1.03K  1.64K   132M   202M
    nvme-eui.00000000000000008ce38ee20af7a201      -      -    531    841  66.5M   101M
    nvme-eui.00000000000000008ce38ee20af7a202      -      -    522    840  65.4M   101M
  mirror-7                                     12.3G   876G  1.03K  1.65K   132M   203M
    nvme-eui.00000000000000008ce38ee20af7a203      -      -    534    844  66.8M   102M
    nvme-eui.00000000000000008ce38ee20af7a204      -      -    522    844  65.3M   102M
---------------------------------------------  -----  -----  -----  -----  -----  -----

root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# zfs get all qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4
NAME                                                   PROPERTY              VALUE                                                                     SOURCE
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  type                  filesystem                                                                -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  creation              Fri Apr  5 22:41 2024                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  used                  104K                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  available             6.76T                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  referenced            104K                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  compressratio         1.00x                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  mounted               yes                                                                       -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  quota                 none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  reservation           none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  recordsize            128K                                                                      local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  mountpoint            /mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  sharenfs              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  checksum              on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  compression           on                                                                        inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  atime                 off                                                                       local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  devices               on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  exec                  on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  setuid                on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  readonly              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  zoned                 off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapdir               hidden                                                                    default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  aclmode               discard                                                                   default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  aclinherit            restricted                                                                default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  createtxg             84                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  canmount              on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  xattr                 sa                                                                        local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  copies                1                                                                         default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  version               5                                                                         -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  utf8only              off                                                                       -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  normalization         none                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  casesensitivity       sensitive                                                                 -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  vscan                 off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  nbmand                off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  sharesmb              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  refquota              none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  refreservation        none                                                                      local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  guid                  10999566983837860056                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  primarycache          metadata                                                                  local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  secondarycache        metadata                                                                  local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbysnapshots       0B                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbydataset         104K                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbychildren        0B                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbyrefreservation  0B                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  logbias               latency                                                                   default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  objsetid              266                                                                       -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  dedup                 off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  mlslabel              none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  sync                  standard                                                                  local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  dnodesize             legacy                                                                    default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  refcompressratio      1.00x                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  written               104K                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  logicalused           42.5K                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  logicalreferenced     42.5K                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  volmode               default                                                                   default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  filesystem_limit      none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapshot_limit        none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  filesystem_count      none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapshot_count        none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapdev               hidden                                                                    default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  acltype               posix                                                                     inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  context               none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  fscontext             none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  defcontext            none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  rootcontext           none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  relatime              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  redundant_metadata    all                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  overlay               on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  encryption            off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  keylocation           none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  keyformat             none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  pbkdf2iters           0                                                                         default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  special_small_blocks  0                                                                         default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  quantastor:shareid    98c3f542-4ade-9cd3-5262-df73044c3f51                                      local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  quantastor:name       R10-NS-4                                                                  inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#

root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60

fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60

fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60

fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60

fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60

fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60

fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60

fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
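The nine runs above are the same command varied over three block sizes and three access patterns, so they can be generated in a loop instead of typed out. A minimal sketch (prints the commands for review; pipe the output to `sh` to actually run them):

```shell
#!/bin/sh
# Build the 3 block sizes x 3 patterns fio matrix used above.
CMDS=""
for bs in 1M 128K 4K; do
  for rw in randrw read write; do
    CMDS="$CMDS
fio --name=fiotest --filename=fio.test --size=100Gb --rw=$rw --bs=$bs --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60"
  done
done
printf '%s\n' "$CMDS"   # review first, then: printf '%s\n' "$CMDS" | sh
```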

fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
fiotest: Laying out IO file (1 file / 102400MiB)
Jobs: 8 (f=7): [f(1),m(7)][100.0%][r=3260MiB/s,w=3282MiB/s][r=3260,w=3281 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=814512: Fri Apr  5 22:47:48 2024
  read: IOPS=3140, BW=3141MiB/s (3293MB/s)(184GiB/60032msec)
    slat (usec): min=175, max=162730, avg=1925.73, stdev=6263.89
    clat (usec): min=3, max=176343, avg=8765.08, stdev=14988.74
     lat (msec): min=2, max=177, avg=10.69, stdev=17.03
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    5], 40.00th=[    5], 50.00th=[    6], 60.00th=[    7],
     | 70.00th=[    8], 80.00th=[   10], 90.00th=[   13], 95.00th=[   17],
     | 99.00th=[  103], 99.50th=[  138], 99.90th=[  157], 99.95th=[  161],
     | 99.99th=[  167]
   bw (  MiB/s): min= 1889, max= 4318, per=99.21%, avg=3115.84, stdev=62.44, samples=959
   iops        : min= 1888, max= 4318, avg=3114.73, stdev=62.46, samples=959
  write: IOPS=3139, BW=3140MiB/s (3292MB/s)(184GiB/60032msec); 0 zone resets
    slat (usec): min=222, max=158229, avg=603.63, stdev=3336.45
    clat (usec): min=4, max=175826, avg=9066.71, stdev=15854.15
     lat (usec): min=611, max=176279, avg=9671.12, stdev=16314.28
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    5],
     | 30.00th=[    5], 40.00th=[    5], 50.00th=[    6], 60.00th=[    7],
     | 70.00th=[    8], 80.00th=[   10], 90.00th=[   13], 95.00th=[   17],
     | 99.00th=[  113], 99.50th=[  140], 99.90th=[  157], 99.95th=[  161],
     | 99.99th=[  167]
   bw (  MiB/s): min= 1750, max= 4452, per=99.22%, avg=3115.52, stdev=67.58, samples=959
   iops        : min= 1749, max= 4452, avg=3113.90, stdev=67.52, samples=959
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 750=0.01%
  lat (msec)   : 4=20.36%, 10=60.83%, 20=15.19%, 50=1.64%, 100=0.88%
  lat (msec)   : 250=1.11%
  cpu          : usr=2.84%, sys=25.83%, ctx=870073, majf=0, minf=90
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=188533,188496,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=3141MiB/s (3293MB/s), 3141MiB/s-3141MiB/s (3293MB/s-3293MB/s), io=184GiB (198GB), run=60032-60032msec
  WRITE: bw=3140MiB/s (3292MB/s), 3140MiB/s-3140MiB/s (3292MB/s-3292MB/s), io=184GiB (198GB), run=60032-60032msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=9.85GiB/s][r=10.1k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1092185: Fri Apr  5 22:48:49 2024
  read: IOPS=10.4k, BW=10.1GiB/s (10.9GB/s)(609GiB/60001msec)
    slat (usec): min=139, max=93578, avg=765.51, stdev=318.40
    clat (usec): min=3, max=106659, avg=5389.93, stdev=1078.54
     lat (usec): min=663, max=107514, avg=6156.20, stdev=1180.28
    clat percentiles (usec):
     |  1.00th=[ 3982],  5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4686],
     | 30.00th=[ 4883], 40.00th=[ 5080], 50.00th=[ 5276], 60.00th=[ 5473],
     | 70.00th=[ 5669], 80.00th=[ 5997], 90.00th=[ 6456], 95.00th=[ 6849],
     | 99.00th=[ 7767], 99.50th=[ 8094], 99.90th=[ 8979], 99.95th=[ 9765],
     | 99.99th=[16909]
   bw (  MiB/s): min= 8327, max=13429, per=99.99%, avg=10391.75, stdev=123.22, samples=955
   iops        : min= 8327, max=13424, avg=10391.46, stdev=123.23, samples=955
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=1.02%, 10=98.93%, 20=0.04%, 50=0.01%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=0.69%, sys=35.70%, ctx=985727, majf=0, minf=16468
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=623587,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=10.1GiB/s (10.9GB/s), 10.1GiB/s-10.1GiB/s (10.9GB/s-10.9GB/s), io=609GiB (654GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=4590MiB/s][w=4590 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1768447: Fri Apr  5 22:49:50 2024
  write: IOPS=5921, BW=5922MiB/s (6209MB/s)(347GiB/60004msec); 0 zone resets
    slat (usec): min=170, max=531971, avg=1345.53, stdev=4836.95
    clat (usec): min=2, max=585830, avg=9458.51, stdev=15850.07
     lat (usec): min=1559, max=587140, avg=10804.89, stdev=17211.44
    clat percentiles (usec):
     |  1.00th=[  1745],  5.00th=[  2212], 10.00th=[  2573], 20.00th=[  3752],
     | 30.00th=[  5538], 40.00th=[  7111], 50.00th=[  8586], 60.00th=[  9503],
     | 70.00th=[ 10421], 80.00th=[ 11469], 90.00th=[ 13960], 95.00th=[ 16581],
     | 99.00th=[ 35390], 99.50th=[ 73925], 99.90th=[258999], 99.95th=[367002],
     | 99.99th=[501220]
   bw (  MiB/s): min=  585, max=16351, per=99.65%, avg=5900.78, stdev=327.59, samples=952
   iops        : min=  585, max=16351, avg=5899.75, stdev=327.58, samples=952
  lat (usec)   : 4=0.01%
  lat (msec)   : 2=2.79%, 4=18.65%, 10=43.49%, 20=32.40%, 50=1.95%
  lat (msec)   : 100=0.33%, 250=0.28%, 500=0.10%, 750=0.01%
  cpu          : usr=4.16%, sys=22.58%, ctx=3020362, majf=0, minf=102
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,355330,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=5922MiB/s (6209MB/s), 5922MiB/s-5922MiB/s (6209MB/s-6209MB/s), io=347GiB (373GB), run=60004-60004msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=0): [f(8)][100.0%][r=2108MiB/s,w=2129MiB/s][r=16.9k,w=17.0k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1873839: Fri Apr  5 22:50:51 2024
  read: IOPS=18.2k, BW=2270MiB/s (2380MB/s)(133GiB/60025msec)
    slat (usec): min=13, max=174249, avg=387.27, stdev=1827.04
    clat (usec): min=2, max=179144, avg=1535.25, stdev=4085.79
     lat (usec): min=220, max=179355, avg=1923.11, stdev=4686.60
    clat percentiles (usec):
     |  1.00th=[   396],  5.00th=[   570], 10.00th=[   635], 20.00th=[   791],
     | 30.00th=[   889], 40.00th=[  1004], 50.00th=[  1106], 60.00th=[  1237],
     | 70.00th=[  1418], 80.00th=[  1778], 90.00th=[  2474], 95.00th=[  3097],
     | 99.00th=[  5080], 99.50th=[  8225], 99.90th=[ 55837], 99.95th=[ 98042],
     | 99.99th=[168821]
   bw (  MiB/s): min= 1306, max= 2889, per=98.92%, avg=2245.37, stdev=38.45, samples=953
   iops        : min=10449, max=23116, avg=17961.62, stdev=307.59, samples=953
  write: IOPS=18.2k, BW=2272MiB/s (2383MB/s)(133GiB/60025msec); 0 zone resets
    slat (usec): min=18, max=168467, avg=45.90, stdev=622.28
    clat (usec): min=4, max=179091, avg=1549.52, stdev=4188.37
     lat (usec): min=213, max=179123, avg=1595.62, stdev=4239.95
    clat percentiles (usec):
     |  1.00th=[   400],  5.00th=[   578], 10.00th=[   635], 20.00th=[   799],
     | 30.00th=[   898], 40.00th=[  1004], 50.00th=[  1106], 60.00th=[  1237],
     | 70.00th=[  1418], 80.00th=[  1795], 90.00th=[  2507], 95.00th=[  3097],
     | 99.00th=[  5211], 99.50th=[  8455], 99.90th=[ 56361], 99.95th=[104334],
     | 99.99th=[166724]
   bw (  MiB/s): min= 1315, max= 2937, per=98.92%, avg=2247.70, stdev=39.68, samples=953
   iops        : min=10522, max=23498, avg=17980.22, stdev=317.46, samples=953
  lat (usec)   : 4=0.01%, 10=0.01%, 50=0.01%, 250=0.45%, 500=3.45%
  lat (usec)   : 750=11.51%, 1000=24.31%
  lat (msec)   : 2=43.92%, 4=14.36%, 10=1.56%, 20=0.17%, 50=0.15%
  lat (msec)   : 100=0.07%, 250=0.05%
  cpu          : usr=2.10%, sys=17.81%, ctx=1668735, majf=0, minf=99
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1089974,1091110,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=2270MiB/s (2380MB/s), 2270MiB/s-2270MiB/s (2380MB/s-2380MB/s), io=133GiB (143GB), run=60025-60025msec
  WRITE: bw=2272MiB/s (2383MB/s), 2272MiB/s-2272MiB/s (2383MB/s-2383MB/s), io=133GiB (143GB), run=60025-60025msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4431MiB/s][r=35.4k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2104002: Fri Apr  5 22:51:51 2024
  read: IOPS=35.6k, BW=4445MiB/s (4660MB/s)(260GiB/60001msec)
    slat (usec): min=10, max=44809, avg=223.25, stdev=62.03
    clat (usec): min=2, max=47081, avg=1575.90, stdev=170.36
     lat (usec): min=224, max=47323, avg=1799.39, stdev=183.19
    clat percentiles (usec):
     |  1.00th=[ 1467],  5.00th=[ 1500], 10.00th=[ 1516], 20.00th=[ 1532],
     | 30.00th=[ 1549], 40.00th=[ 1565], 50.00th=[ 1582], 60.00th=[ 1582],
     | 70.00th=[ 1598], 80.00th=[ 1614], 90.00th=[ 1631], 95.00th=[ 1647],
     | 99.00th=[ 1680], 99.50th=[ 1713], 99.90th=[ 2089], 99.95th=[ 2638],
     | 99.99th=[ 3982]
   bw (  MiB/s): min= 4235, max= 4558, per=99.99%, avg=4443.98, stdev= 6.73, samples=952
   iops        : min=33880, max=36458, avg=35551.73, stdev=53.88, samples=952
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.87%, 4=0.11%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=0.91%, sys=14.52%, ctx=2135975, majf=0, minf=2157
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2133429,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4445MiB/s (4660MB/s), 4445MiB/s-4445MiB/s (4660MB/s-4660MB/s), io=260GiB (280GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=4966MiB/s][w=39.7k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2105185: Fri Apr  5 22:52:52 2024
  write: IOPS=51.3k, BW=6406MiB/s (6718MB/s)(375GiB/60001msec); 0 zone resets
    slat (usec): min=14, max=500966, avg=152.84, stdev=1025.35
    clat (usec): min=2, max=501275, avg=1094.50, stdev=3141.82
     lat (usec): min=118, max=501320, avg=1247.79, stdev=3420.97
    clat percentiles (usec):
     |  1.00th=[   192],  5.00th=[   210], 10.00th=[   245], 20.00th=[   461],
     | 30.00th=[   685], 40.00th=[   881], 50.00th=[  1029], 60.00th=[  1123],
     | 70.00th=[  1205], 80.00th=[  1319], 90.00th=[  1549], 95.00th=[  1778],
     | 99.00th=[  2900], 99.50th=[  6128], 99.90th=[ 30278], 99.95th=[ 55837],
     | 99.99th=[139461]
   bw (  MiB/s): min=  921, max=19316, per=99.62%, avg=6382.08, stdev=320.91, samples=952
   iops        : min= 7369, max=154530, avg=51055.25, stdev=2567.28, samples=952
  lat (usec)   : 4=0.01%, 250=10.43%, 500=10.97%, 750=11.82%, 1000=13.91%
  lat (msec)   : 2=50.05%, 4=2.15%, 10=0.35%, 20=0.15%, 50=0.12%
  lat (msec)   : 100=0.04%, 250=0.02%, 500=0.01%, 750=0.01%
  cpu          : usr=3.08%, sys=20.78%, ctx=3421517, majf=0, minf=96
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3075135,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=6406MiB/s (6718MB/s), 6406MiB/s-6406MiB/s (6718MB/s-6718MB/s), io=375GiB (403GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=37.7MiB/s,w=39.0MiB/s][r=9640,w=9987 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2272104: Fri Apr  5 22:53:53 2024
  read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(2357MiB/60001msec)
    slat (usec): min=3, max=185852, avg=277.91, stdev=1229.83
    clat (usec): min=2, max=191230, avg=2783.17, stdev=4575.31
     lat (usec): min=232, max=192256, avg=3061.47, stdev=4856.45
    clat percentiles (usec):
     |  1.00th=[  1549],  5.00th=[  1713], 10.00th=[  1811], 20.00th=[  1942],
     | 30.00th=[  2040], 40.00th=[  2114], 50.00th=[  2212], 60.00th=[  2311],
     | 70.00th=[  2442], 80.00th=[  2638], 90.00th=[  4178], 95.00th=[  5473],
     | 99.00th=[  8586], 99.50th=[ 11600], 99.90th=[ 67634], 99.95th=[122160],
     | 99.99th=[177210]
   bw (  KiB/s): min=30968, max=48014, per=99.82%, avg=40151.48, stdev=425.08, samples=952
   iops        : min= 7742, max=12003, avg=10037.39, stdev=106.26, samples=952
  write: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(2357MiB/60001msec); 0 zone resets
    slat (usec): min=6, max=178118, avg=509.66, stdev=1608.99
    clat (usec): min=2, max=190150, avg=2789.22, stdev=4609.67
     lat (usec): min=312, max=191361, avg=3299.38, stdev=5066.84
    clat percentiles (usec):
     |  1.00th=[  1549],  5.00th=[  1713], 10.00th=[  1811], 20.00th=[  1942],
     | 30.00th=[  2040], 40.00th=[  2114], 50.00th=[  2212], 60.00th=[  2311],
     | 70.00th=[  2442], 80.00th=[  2671], 90.00th=[  4178], 95.00th=[  5473],
     | 99.00th=[  8586], 99.50th=[ 11600], 99.90th=[ 70779], 99.95th=[122160],
     | 99.99th=[175113]
   bw (  KiB/s): min=32136, max=48624, per=99.79%, avg=40134.80, stdev=416.99, samples=952
   iops        : min= 8034, max=12156, avg=10033.22, stdev=104.24, samples=952
  lat (usec)   : 4=0.01%, 10=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=26.16%, 4=63.24%, 10=9.95%, 20=0.34%, 50=0.17%
  lat (msec)   : 100=0.08%, 250=0.06%
  cpu          : usr=0.88%, sys=12.33%, ctx=2070980, majf=0, minf=109
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=603378,603310,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=2357MiB (2471MB), run=60001-60001msec
  WRITE: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=2357MiB (2471MB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=207MiB/s][r=53.0k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2684926: Fri Apr  5 22:54:54 2024
  read: IOPS=50.7k, BW=198MiB/s (208MB/s)(11.6GiB/60001msec)
    slat (usec): min=2, max=102985, avg=155.73, stdev=103.13
    clat (usec): min=2, max=106271, avg=1104.97, stdev=367.85
     lat (usec): min=25, max=106627, avg=1260.95, stdev=402.69
    clat percentiles (usec):
     |  1.00th=[  594],  5.00th=[  898], 10.00th=[  930], 20.00th=[  955],
     | 30.00th=[  979], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1090],
     | 70.00th=[ 1156], 80.00th=[ 1270], 90.00th=[ 1418], 95.00th=[ 1500],
     | 99.00th=[ 1696], 99.50th=[ 1811], 99.90th=[ 2376], 99.95th=[ 2802],
     | 99.99th=[ 4228]
   bw (  KiB/s): min=158872, max=257242, per=99.94%, avg=202786.89, stdev=1598.34, samples=952
   iops        : min=39716, max=64310, avg=50696.54, stdev=399.60, samples=952
  lat (usec)   : 4=0.01%, 50=0.07%, 100=0.01%, 250=0.03%, 500=0.28%
  lat (usec)   : 750=2.04%, 1000=34.74%
  lat (msec)   : 2=62.65%, 4=0.18%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=1.51%, sys=16.21%, ctx=3110107, majf=0, minf=157
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3043812,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=198MiB/s (208MB/s), 198MiB/s-198MiB/s (208MB/s-208MB/s), io=11.6GiB (12.5GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1088MiB/s][w=279k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3321024: Fri Apr  5 22:55:54 2024
  write: IOPS=279k, BW=1089MiB/s (1142MB/s)(63.8GiB/60001msec); 0 zone resets
    slat (usec): min=3, max=35475, avg=27.37, stdev=138.26
    clat (nsec): min=1951, max=35604k, avg=201663.41, stdev=368697.33
     lat (usec): min=12, max=35627, avg=229.15, stdev=393.60
    clat percentiles (usec):
     |  1.00th=[   57],  5.00th=[   79], 10.00th=[   92], 20.00th=[  110],
     | 30.00th=[  121], 40.00th=[  135], 50.00th=[  151], 60.00th=[  169],
     | 70.00th=[  194], 80.00th=[  235], 90.00th=[  420], 95.00th=[  482],
     | 99.00th=[  553], 99.50th=[  594], 99.90th=[ 3720], 99.95th=[ 7242],
     | 99.99th=[22414]
   bw (  MiB/s): min= 1034, max= 1141, per=99.99%, avg=1089.29, stdev= 2.79, samples=954
   iops        : min=264776, max=292194, avg=278857.87, stdev=713.98, samples=954
  lat (usec)   : 2=0.01%, 4=0.01%, 20=0.01%, 50=0.57%, 100=13.28%
  lat (usec)   : 250=67.86%, 500=14.84%, 750=3.24%, 1000=0.05%
  lat (msec)   : 2=0.06%, 4=0.01%, 10=0.07%, 20=0.01%, 50=0.01%
  cpu          : usr=5.55%, sys=56.33%, ctx=7574559, majf=0, minf=78
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,16733654,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1089MiB/s (1142MB/s), 1089MiB/s-1089MiB/s (1142MB/s-1142MB/s), io=63.8GiB (68.5GB), run=60001-60001msec

#Kioxia CM6 Raid-10 4 namespaces sync always

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n2     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n3     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n4     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n2     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n3     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n4     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n2     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n3     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n4     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n2     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n3     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n4     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
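The four equal namespaces per drive shown above can be carved with nvme-cli along these lines. This is an illustrative sketch only: the block count and controller id are assumptions (check `nvme id-ctrl` and `nvme id-ns` for your drive), and `delete-ns` destroys all data, so pipe to `sh` only on a scratch device.

```shell
#!/bin/sh
# Print the nvme-cli steps to split one drive into four equal namespaces.
DEV=/dev/nvme0
BLOCKS=1874003968   # ~1/4 of a 3.84TB drive in 512B sectors (assumed value)
echo "nvme delete-ns $DEV --namespace-id=0xffffffff"   # wipe all namespaces
for i in 1 2 3 4; do
  echo "nvme create-ns $DEV --nsze=$BLOCKS --ncap=$BLOCKS --block-size=512"
  echo "nvme attach-ns $DEV --namespace-id=$i --controllers=0"
done
echo "nvme reset $DEV"   # rescan so /dev/nvme0n1..n4 appear
```

Note that `create-ns` returns the new NSID rather than letting you choose it; the loop assumes the controller hands them out sequentially starting at 1.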

root@quantastor:/# zpool status -L
  pool: qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
 state: ONLINE
config:

        NAME                                     STATE     READ WRITE CKSUM
        qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def  ONLINE       0     0     0
          mirror-0                               ONLINE       0     0     0
            nvme0n1                              ONLINE       0     0     0
            nvme0n2                              ONLINE       0     0     0
          mirror-1                               ONLINE       0     0     0
            nvme0n3                              ONLINE       0     0     0
            nvme0n4                              ONLINE       0     0     0
          mirror-2                               ONLINE       0     0     0
            nvme1n1                              ONLINE       0     0     0
            nvme1n2                              ONLINE       0     0     0
          mirror-3                               ONLINE       0     0     0
            nvme1n3                              ONLINE       0     0     0
            nvme1n4                              ONLINE       0     0     0
          mirror-4                               ONLINE       0     0     0
            nvme2n1                              ONLINE       0     0     0
            nvme2n2                              ONLINE       0     0     0
          mirror-5                               ONLINE       0     0     0
            nvme2n3                              ONLINE       0     0     0
            nvme2n4                              ONLINE       0     0     0
          mirror-6                               ONLINE       0     0     0
            nvme3n1                              ONLINE       0     0     0
            nvme3n2                              ONLINE       0     0     0
          mirror-7                               ONLINE       0     0     0
            nvme3n3                              ONLINE       0     0     0
            nvme3n4                              ONLINE       0     0     0
			
root@quantastor:/# zpool status -v
  pool: qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def        ONLINE       0     0     0
          mirror-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d202  ONLINE       0     0     0
          mirror-1                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d203  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d204  ONLINE       0     0     0
          mirror-2                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6802  ONLINE       0     0     0
          mirror-3                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6804  ONLINE       0     0     0
          mirror-4                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b802  ONLINE       0     0     0
          mirror-5                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b804  ONLINE       0     0     0
          mirror-6                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a202  ONLINE       0     0     0
          mirror-7                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a203  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a204  ONLINE       0     0     0

errors: No known data errors
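For reference, a sketch of the `zpool create` that would reproduce the striped-mirror layout above ("tank" stands in for the QuantaStor pool GUID, and ashift=12 is an assumption, not taken from the transcript). Note that each mirror pairs two namespaces of the same physical drive, so this layout trades drive-failure redundancy for a namespace A/B test.

```shell
#!/bin/sh
# Print a zpool create matching the 8-mirror layout shown above.
CMD="zpool create -o ashift=12 tank"
for d in 0 1 2 3; do
  CMD="$CMD mirror /dev/nvme${d}n1 /dev/nvme${d}n2"
  CMD="$CMD mirror /dev/nvme${d}n3 /dev/nvme${d}n4"
done
echo "$CMD"   # review, then run (by-id paths are sturdier in practice)
```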

root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# zpool iostat -v
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def        97.7G  6.84T  8.25K  13.1K  1.03G  1.58G
  mirror-0                                     12.2G   876G  1.04K  1.63K   134M   201M
    nvme-eui.00000000000000008ce38ee20af6d201      -      -    537    835  67.2M   100M
    nvme-eui.00000000000000008ce38ee20af6d202      -      -    532    835  66.6M   100M
  mirror-1                                     12.3G   876G  1.01K  1.65K   130M   203M
    nvme-eui.00000000000000008ce38ee20af6d203      -      -    517    843  64.7M   101M
    nvme-eui.00000000000000008ce38ee20af6d204      -      -    519    843  65.0M   101M
  mirror-2                                     12.1G   876G  1.02K  1.64K   131M   201M
    nvme-eui.00000000000000008ce38ee20d6b6801      -      -    525    839  65.7M   101M
    nvme-eui.00000000000000008ce38ee20d6b6802      -      -    522    838  65.3M   101M
  mirror-3                                     12.3G   876G  1.04K  1.65K   133M   203M
    nvme-eui.00000000000000008ce38ee20d6b6803      -      -    526    842  65.9M   101M
    nvme-eui.00000000000000008ce38ee20d6b6804      -      -    534    842  66.8M   101M
  mirror-4                                     12.2G   876G  1.03K  1.63K   132M   201M
    nvme-eui.00000000000000008ce38ee20af7b801      -      -    531    836  66.4M   100M
    nvme-eui.00000000000000008ce38ee20af7b802      -      -    522    836  65.3M   100M
  mirror-5                                     12.2G   876G  1.04K  1.64K   133M   203M
    nvme-eui.00000000000000008ce38ee20af7b803      -      -    533    842  66.7M   101M
    nvme-eui.00000000000000008ce38ee20af7b804      -      -    533    842  66.7M   101M
  mirror-6                                     12.2G   876G  1.03K  1.64K   132M   202M
    nvme-eui.00000000000000008ce38ee20af7a201      -      -    531    841  66.5M   101M
    nvme-eui.00000000000000008ce38ee20af7a202      -      -    522    840  65.4M   101M
  mirror-7                                     12.3G   876G  1.03K  1.65K   132M   203M
    nvme-eui.00000000000000008ce38ee20af7a203      -      -    534    844  66.8M   102M
    nvme-eui.00000000000000008ce38ee20af7a204      -      -    522    844  65.3M   102M
---------------------------------------------  -----  -----  -----  -----  -----  -----
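For reference, the mirror topology in the iostat listing above can be reconstructed as a single `zpool create`. This is a hedged sketch: device names and the pool name are copied from the transcript, but the exact command used (QuantaStor drives pool creation) is an assumption.

```shell
# Hedged reconstruction of the pool layout shown above, not necessarily the
# exact command QuantaStor issued. Each mirror appears to pair two namespaces
# with the same EUI base, i.e. two namespaces of the same physical drive, so
# this layout benchmarks namespace parallelism rather than providing
# drive-failure redundancy.
zpool create qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af6d201 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af6d202 \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af6d203 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af6d204 \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20d6b6801 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20d6b6802 \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20d6b6803 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20d6b6804 \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7b801 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7b802 \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7b803 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7b804 \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7a201 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7a202 \
  mirror /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7a203 /dev/disk/by-id/nvme-eui.00000000000000008ce38ee20af7a204
```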

root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# zfs get all qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4
NAME                                                   PROPERTY              VALUE                                                                     SOURCE
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  type                  filesystem                                                                -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  creation              Fri Apr  5 22:41 2024                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  used                  97.7G                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  available             6.67T                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  referenced            97.7G                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  compressratio         1.02x                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  mounted               yes                                                                       -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  quota                 none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  reservation           none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  recordsize            128K                                                                      local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  mountpoint            /mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  sharenfs              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  checksum              on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  compression           on                                                                        inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  atime                 off                                                                       local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  devices               on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  exec                  on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  setuid                on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  readonly              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  zoned                 off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapdir               hidden                                                                    default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  aclmode               discard                                                                   default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  aclinherit            restricted                                                                default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  createtxg             84                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  canmount              on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  xattr                 sa                                                                        local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  copies                1                                                                         default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  version               5                                                                         -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  utf8only              off                                                                       -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  normalization         none                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  casesensitivity       sensitive                                                                 -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  vscan                 off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  nbmand                off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  sharesmb              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  refquota              none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  refreservation        none                                                                      local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  guid                  10999566983837860056                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  primarycache          metadata                                                                  local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  secondarycache        metadata                                                                  local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbysnapshots       0B                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbydataset         97.7G                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbychildren        0B                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  usedbyrefreservation  0B                                                                        -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  logbias               latency                                                                   default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  objsetid              266                                                                       -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  dedup                 off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  mlslabel              none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  sync                  always                                                                    local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  dnodesize             legacy                                                                    default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  refcompressratio      1.02x                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  written               97.7G                                                                     -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  logicalused           100G                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  logicalreferenced     100G                                                                      -
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  volmode               default                                                                   default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  filesystem_limit      none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapshot_limit        none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  filesystem_count      none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapshot_count        none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  snapdev               hidden                                                                    default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  acltype               posix                                                                     inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  context               none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  fscontext             none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  defcontext            none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  rootcontext           none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  relatime              off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  redundant_metadata    all                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  overlay               on                                                                        default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  encryption            off                                                                       default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  keylocation           none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  keyformat             none                                                                      default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  pbkdf2iters           0                                                                         default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  special_small_blocks  0                                                                         default
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  quantastor:shareid    98c3f542-4ade-9cd3-5262-df73044c3f51                                      local
qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4  quantastor:name       R10-NS-4                                                                  inherited from qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def

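For anyone skimming the wall of properties above, these are the ones with SOURCE `local`, i.e. tuned away from the defaults on this dataset. A sketch of the equivalent `zfs set` calls (dataset name copied from the transcript):

```shell
# Non-default (SOURCE=local) properties from the "zfs get all" output above,
# expressed as explicit "zfs set" commands. Illustrative only.
DS=qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4
zfs set recordsize=128K          "$DS"
zfs set atime=off                "$DS"
zfs set xattr=sa                 "$DS"
zfs set primarycache=metadata    "$DS"   # ARC caches metadata only, not data
zfs set secondarycache=metadata  "$DS"
zfs set sync=always              "$DS"   # every write is synchronous; this
                                         # dominates the write latency below
```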
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60 && \
fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=1462MiB/s,w=1478MiB/s][r=1462,w=1478 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3331533: Fri Apr  5 23:01:46 2024
  read: IOPS=1455, BW=1456MiB/s (1526MB/s)(85.3GiB/60002msec)
    slat (usec): min=200, max=10984, avg=2500.70, stdev=1113.97
    clat (usec): min=4, max=44438, avg=19390.09, stdev=3169.97
     lat (usec): min=4712, max=46267, avg=21891.72, stdev=3461.36
    clat percentiles (usec):
     |  1.00th=[13435],  5.00th=[14877], 10.00th=[15664], 20.00th=[16712],
     | 30.00th=[17433], 40.00th=[18220], 50.00th=[19006], 60.00th=[19792],
     | 70.00th=[20841], 80.00th=[21890], 90.00th=[23725], 95.00th=[25035],
     | 99.00th=[28443], 99.50th=[29754], 99.90th=[32113], 99.95th=[33424],
     | 99.99th=[36439]
   bw (  MiB/s): min= 1142, max= 1749, per=99.94%, avg=1454.86, stdev=14.99, samples=960
   iops        : min= 1142, max= 1748, avg=1454.28, stdev=14.99, samples=960
  write: IOPS=1456, BW=1456MiB/s (1527MB/s)(85.3GiB/60002msec); 0 zone resets
    slat (usec): min=1002, max=16997, avg=2981.70, stdev=978.75
    clat (usec): min=3, max=43871, avg=19066.88, stdev=3187.91
     lat (usec): min=3882, max=49008, avg=22049.52, stdev=3450.49
    clat percentiles (usec):
     |  1.00th=[13042],  5.00th=[14484], 10.00th=[15401], 20.00th=[16450],
     | 30.00th=[17171], 40.00th=[17957], 50.00th=[18744], 60.00th=[19530],
     | 70.00th=[20317], 80.00th=[21627], 90.00th=[23200], 95.00th=[24773],
     | 99.00th=[28181], 99.50th=[29492], 99.90th=[32113], 99.95th=[33817],
     | 99.99th=[36963]
   bw (  MiB/s): min= 1145, max= 1713, per=99.94%, avg=1455.50, stdev=13.99, samples=960
   iops        : min= 1145, max= 1713, avg=1454.91, stdev=13.99, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 4=0.01%, 10=0.01%, 20=64.20%, 50=35.78%
  cpu          : usr=1.32%, sys=14.02%, ctx=715394, majf=0, minf=88
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=87346,87384,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=1456MiB/s (1526MB/s), 1456MiB/s-1456MiB/s (1526MB/s-1526MB/s), io=85.3GiB (91.6GB), run=60002-60002msec
  WRITE: bw=1456MiB/s (1527MB/s), 1456MiB/s-1456MiB/s (1527MB/s-1527MB/s), io=85.3GiB (91.6GB), run=60002-60002msec
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=10.0GiB/s][r=10.3k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3574260: Fri Apr  5 23:02:46 2024
  read: IOPS=10.3k, BW=10.1GiB/s (10.8GB/s)(604GiB/60001msec)
    slat (usec): min=140, max=6915, avg=771.84, stdev=227.24
    clat (usec): min=3, max=11804, avg=5435.73, stdev=888.33
     lat (usec): min=679, max=13354, avg=6208.38, stdev=989.06
    clat percentiles (usec):
     |  1.00th=[ 3916],  5.00th=[ 4146], 10.00th=[ 4359], 20.00th=[ 4621],
     | 30.00th=[ 4883], 40.00th=[ 5145], 50.00th=[ 5342], 60.00th=[ 5604],
     | 70.00th=[ 5866], 80.00th=[ 6194], 90.00th=[ 6587], 95.00th=[ 7046],
     | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[ 8848], 99.95th=[ 9110],
     | 99.99th=[10028]
   bw (  MiB/s): min= 8552, max=13216, per=99.96%, avg=10301.17, stdev=155.18, samples=952
   iops        : min= 8552, max=13216, avg=10300.85, stdev=155.16, samples=952
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=1.86%, 10=98.13%, 20=0.01%
  cpu          : usr=0.67%, sys=38.46%, ctx=960301, majf=0, minf=16471
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=618337,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=10.1GiB/s (10.8GB/s), 10.1GiB/s-10.1GiB/s (10.8GB/s-10.8GB/s), io=604GiB (648GB), run=60001-60001msec
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1544MiB/s][w=1544 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=81609: Fri Apr  5 23:03:47 2024
  write: IOPS=1567, BW=1567MiB/s (1643MB/s)(91.8GiB/60002msec); 0 zone resets
    slat (usec): min=1441, max=14293, avg=5100.34, stdev=334.79
    clat (usec): min=3, max=46699, avg=35730.16, stdev=1765.48
     lat (usec): min=5136, max=52758, avg=40831.29, stdev=1974.46
    clat percentiles (usec):
     |  1.00th=[30540],  5.00th=[32900], 10.00th=[34866], 20.00th=[35390],
     | 30.00th=[35914], 40.00th=[35914], 50.00th=[35914], 60.00th=[35914],
     | 70.00th=[35914], 80.00th=[36439], 90.00th=[36439], 95.00th=[36963],
     | 99.00th=[40109], 99.50th=[41681], 99.90th=[44303], 99.95th=[44827],
     | 99.99th=[45351]
   bw (  MiB/s): min= 1440, max= 1928, per=99.95%, avg=1566.31, stdev= 6.70, samples=953
   iops        : min= 1440, max= 1928, avg=1566.10, stdev= 6.70, samples=953
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 10=0.01%, 20=0.14%, 50=99.84%
  cpu          : usr=1.40%, sys=7.95%, ctx=300632, majf=0, minf=92
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,94029,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1567MiB/s (1643MB/s), 1567MiB/s-1567MiB/s (1643MB/s-1643MB/s), io=91.8GiB (98.6GB), run=60002-60002msec
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=829MiB/s,w=829MiB/s][r=6630,w=6630 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=116562: Fri Apr  5 23:04:47 2024
  read: IOPS=6609, BW=826MiB/s (866MB/s)(48.4GiB/60001msec)
    slat (usec): min=24, max=12173, avg=631.00, stdev=198.05
    clat (usec): min=3, max=18383, avg=4240.37, stdev=605.04
     lat (usec): min=422, max=19528, avg=4871.84, stdev=676.06
    clat percentiles (usec):
     |  1.00th=[ 3425],  5.00th=[ 3621], 10.00th=[ 3752], 20.00th=[ 3851],
     | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4228],
     | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5145],
     | 99.00th=[ 6915], 99.50th=[ 7504], 99.90th=[ 8979], 99.95th=[ 9765],
     | 99.99th=[13173]
   bw (  KiB/s): min=750592, max=941844, per=100.00%, avg=846014.03, stdev=4595.26, samples=954
   iops        : min= 5864, max= 7358, avg=6609.38, stdev=35.90, samples=954
  write: IOPS=6602, BW=825MiB/s (865MB/s)(48.4GiB/60001msec); 0 zone resets
    slat (usec): min=336, max=11243, avg=572.85, stdev=141.92
    clat (usec): min=2, max=18063, avg=4239.26, stdev=600.76
     lat (usec): min=523, max=18622, avg=4812.59, stdev=651.82
    clat percentiles (usec):
     |  1.00th=[ 3425],  5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884],
     | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228],
     | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5145],
     | 99.00th=[ 6915], 99.50th=[ 7504], 99.90th=[ 8979], 99.95th=[ 9765],
     | 99.99th=[14222]
   bw (  KiB/s): min=743680, max=954644, per=99.99%, avg=845105.90, stdev=5089.87, samples=954
   iops        : min= 5810, max= 7458, avg=6602.27, stdev=39.76, samples=954
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=34.65%, 10=65.30%, 20=0.04%
  cpu          : usr=1.00%, sys=10.16%, ctx=1769743, majf=0, minf=106
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=396566,396179,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=826MiB/s (866MB/s), 826MiB/s-826MiB/s (866MB/s-866MB/s), io=48.4GiB (51.0GB), run=60001-60001msec
  WRITE: bw=825MiB/s (865MB/s), 825MiB/s-825MiB/s (865MB/s-865MB/s), io=48.4GiB (51.9GB), run=60001-60001msec
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4503MiB/s][r=36.0k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=296684: Fri Apr  5 23:05:48 2024
  read: IOPS=35.2k, BW=4397MiB/s (4610MB/s)(258GiB/60002msec)
    slat (usec): min=13, max=10385, avg=225.72, stdev=27.72
    clat (usec): min=2, max=16828, avg=1593.03, stdev=84.97
     lat (usec): min=239, max=18317, avg=1818.99, stdev=92.32
    clat percentiles (usec):
     |  1.00th=[ 1483],  5.00th=[ 1516], 10.00th=[ 1532], 20.00th=[ 1549],
     | 30.00th=[ 1565], 40.00th=[ 1582], 50.00th=[ 1598], 60.00th=[ 1598],
     | 70.00th=[ 1614], 80.00th=[ 1631], 90.00th=[ 1647], 95.00th=[ 1663],
     | 99.00th=[ 1713], 99.50th=[ 1729], 99.90th=[ 1942], 99.95th=[ 2147],
     | 99.99th=[ 3064]
   bw (  MiB/s): min= 4201, max= 4516, per=99.99%, avg=4396.02, stdev= 6.00, samples=957
   iops        : min=33614, max=36134, avg=35168.10, stdev=48.05, samples=957
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.92%, 4=0.07%, 10=0.01%, 20=0.01%
  cpu          : usr=0.80%, sys=16.41%, ctx=2112542, majf=1, minf=2146
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2110472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4397MiB/s (4610MB/s), 4397MiB/s-4397MiB/s (4610MB/s-4610MB/s), io=258GiB (277GB), run=60002-60002msec
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1195MiB/s][w=9556 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=297408: Fri Apr  5 23:06:49 2024
  write: IOPS=10.4k, BW=1304MiB/s (1367MB/s)(76.4GiB/60002msec); 0 zone resets
    slat (usec): min=461, max=12285, avg=764.51, stdev=206.57
    clat (usec): min=2, max=18850, avg=5370.59, stdev=883.60
     lat (usec): min=705, max=19806, avg=6135.47, stdev=989.92
    clat percentiles (usec):
     |  1.00th=[ 4178],  5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4817],
     | 30.00th=[ 4948], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5276],
     | 70.00th=[ 5473], 80.00th=[ 6128], 90.00th=[ 6718], 95.00th=[ 6783],
     | 99.00th=[ 7046], 99.50th=[ 7504], 99.90th=[15270], 99.95th=[15926],
     | 99.99th=[16909]
   bw (  MiB/s): min= 1022, max= 1532, per=99.99%, avg=1303.54, stdev=13.05, samples=953
   iops        : min= 8176, max=12262, avg=10427.96, stdev=104.41, samples=953
  lat (usec)   : 4=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.10%, 10=99.66%, 20=0.23%
  cpu          : usr=0.77%, sys=6.86%, ctx=1362154, majf=0, minf=83
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,625754,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1304MiB/s (1367MB/s), 1304MiB/s-1304MiB/s (1367MB/s-1367MB/s), io=76.4GiB (82.0GB), run=60002-60002msec
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=29.2MiB/s,w=29.1MiB/s][r=7487,w=7449 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=420216: Fri Apr  5 23:07:49 2024
  read: IOPS=8314, BW=32.5MiB/s (34.1MB/s)(1949MiB/60001msec)
    slat (usec): min=4, max=178305, avg=326.68, stdev=1097.42
    clat (usec): min=2, max=194708, avg=3372.19, stdev=5057.25
     lat (usec): min=241, max=194951, avg=3699.28, stdev=5321.81
    clat percentiles (usec):
     |  1.00th=[  1844],  5.00th=[  2073], 10.00th=[  2212], 20.00th=[  2376],
     | 30.00th=[  2507], 40.00th=[  2606], 50.00th=[  2737], 60.00th=[  2835],
     | 70.00th=[  2999], 80.00th=[  3228], 90.00th=[  3982], 95.00th=[  7046],
     | 99.00th=[ 11469], 99.50th=[ 16712], 99.90th=[ 85459], 99.95th=[126354],
     | 99.99th=[179307]
   bw (  KiB/s): min=24336, max=43173, per=99.68%, avg=33151.69, stdev=534.35, samples=953
   iops        : min= 6084, max=10793, avg=8287.51, stdev=133.57, samples=953
  write: IOPS=8308, BW=32.5MiB/s (34.0MB/s)(1947MiB/60001msec); 0 zone resets
    slat (usec): min=67, max=189591, avg=627.84, stdev=1871.67
    clat (usec): min=2, max=194470, avg=3368.49, stdev=4910.50
     lat (usec): min=394, max=195325, avg=3997.20, stdev=5479.34
    clat percentiles (usec):
     |  1.00th=[  1860],  5.00th=[  2073], 10.00th=[  2212], 20.00th=[  2376],
     | 30.00th=[  2507], 40.00th=[  2606], 50.00th=[  2737], 60.00th=[  2868],
     | 70.00th=[  2999], 80.00th=[  3228], 90.00th=[  3982], 95.00th=[  7046],
     | 99.00th=[ 11469], 99.50th=[ 16188], 99.90th=[ 84411], 99.95th=[120062],
     | 99.99th=[179307]
   bw (  KiB/s): min=25378, max=43064, per=99.68%, avg=33127.43, stdev=519.52, samples=953
   iops        : min= 6344, max=10765, avg=8281.46, stdev=129.86, samples=953
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=3.14%, 4=86.96%, 10=8.39%, 20=1.12%, 50=0.24%
  lat (msec)   : 100=0.09%, 250=0.07%
  cpu          : usr=0.82%, sys=13.38%, ctx=2887737, majf=0, minf=119
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=498879,498542,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=1949MiB (2043MB), run=60001-60001msec
  WRITE: bw=32.5MiB/s (34.0MB/s), 32.5MiB/s-32.5MiB/s (34.0MB/s-34.0MB/s), io=1947MiB (2042MB), run=60001-60001msec
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=193MiB/s][r=49.5k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=782326: Fri Apr  5 23:08:50 2024
  read: IOPS=50.2k, BW=196MiB/s (206MB/s)(11.5GiB/60001msec)
    slat (usec): min=2, max=100146, avg=157.24, stdev=100.59
    clat (usec): min=2, max=104261, avg=1115.49, stdev=330.15
     lat (usec): min=31, max=104442, avg=1272.97, stdev=360.45
    clat percentiles (usec):
     |  1.00th=[  791],  5.00th=[  922], 10.00th=[  938], 20.00th=[  963],
     | 30.00th=[  988], 40.00th=[ 1020], 50.00th=[ 1057], 60.00th=[ 1090],
     | 70.00th=[ 1156], 80.00th=[ 1270], 90.00th=[ 1418], 95.00th=[ 1500],
     | 99.00th=[ 1696], 99.50th=[ 1795], 99.90th=[ 2245], 99.95th=[ 2704],
     | 99.99th=[ 4015]
   bw (  KiB/s): min=159912, max=235312, per=100.00%, avg=201055.59, stdev=1302.59, samples=952
   iops        : min=39978, max=58828, avg=50263.76, stdev=325.65, samples=952
  lat (usec)   : 4=0.01%, 50=0.05%, 100=0.01%, 250=0.03%, 500=0.12%
  lat (usec)   : 750=0.57%, 1000=33.52%
  lat (msec)   : 2=65.53%, 4=0.17%, 10=0.01%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%, 250=0.01%
  cpu          : usr=1.53%, sys=16.97%, ctx=3107886, majf=0, minf=159
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3015085,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=196MiB/s (206MB/s), 196MiB/s-196MiB/s (206MB/s-206MB/s), io=11.5GiB (12.3GB), run=60001-60001msec
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=205MiB/s][w=52.5k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1405866: Fri Apr  5 23:09:51 2024
  write: IOPS=53.3k, BW=208MiB/s (219MB/s)(12.2GiB/60002msec); 0 zone resets
    slat (usec): min=55, max=60363, avg=148.13, stdev=294.39
    clat (usec): min=2, max=64701, avg=1050.71, stdev=835.72
     lat (usec): min=87, max=64965, avg=1199.08, stdev=897.38
    clat percentiles (usec):
     |  1.00th=[  717],  5.00th=[  791], 10.00th=[  832], 20.00th=[  889],
     | 30.00th=[  930], 40.00th=[  963], 50.00th=[ 1004], 60.00th=[ 1037],
     | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1237], 95.00th=[ 1336],
     | 99.00th=[ 1614], 99.50th=[ 1827], 99.90th=[ 9372], 99.95th=[11994],
     | 99.99th=[51643]
   bw (  KiB/s): min=183544, max=242104, per=100.00%, avg=213377.85, stdev=1348.49, samples=953
   iops        : min=45886, max=60526, avg=53344.21, stdev=337.13, samples=953
  lat (usec)   : 4=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=2.15%
  lat (usec)   : 1000=47.25%
  lat (msec)   : 2=50.20%, 4=0.15%, 10=0.16%, 20=0.07%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=1.62%, sys=26.18%, ctx=9556092, majf=0, minf=90
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3200840,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=208MiB/s (219MB/s), 208MiB/s-208MiB/s (219MB/s-219MB/s), io=12.2GiB (13.1GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4#

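Side note for anyone reproducing the A/B: rather than eyeballing the text reports above, fio can emit machine-readable results with `--output-format=json`, which makes comparing the namespace layout against the single-namespace baseline much less error-prone. A minimal sketch of a summarizer (the JSON field names `jobs`, `read`/`write`, `bw`, `iops`, `io_bytes` are fio's standard JSON schema; the sample report below is synthetic, not one of my results):

```python
import json

def summarize(fio_json: str) -> dict:
    """Collapse a fio --output-format=json report into total bw/iops per direction."""
    data = json.loads(fio_json)
    out = {}
    for job in data["jobs"]:
        for direction in ("read", "write"):
            d = job[direction]
            if d["io_bytes"] == 0:
                continue  # direction not exercised by this job
            out.setdefault(direction, {"bw_kib": 0, "iops": 0.0})
            out[direction]["bw_kib"] += d["bw"]   # fio reports bw in KiB/s
            out[direction]["iops"] += d["iops"]
    return out

# Synthetic single-job read-only report for illustration:
sample = json.dumps({"jobs": [
    {"read":  {"io_bytes": 1, "bw": 204800, "iops": 51200.0},
     "write": {"io_bytes": 0, "bw": 0,      "iops": 0.0}},
]})
print(summarize(sample))  # {'read': {'bw_kib': 204800, 'iops': 51200.0}}
```

Run each layout with the same fio job file plus `--output-format=json --output=run.json`, then diff the summaries.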
# Kioxia CM6 RAID-10, sync=standard

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         844.35  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         836.78  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1         851.14  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         842.81  GB /   3.84  TB    512   B +  0 B   2.2.0

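These CM6 drives are each presenting a single full-size namespace here, so this run is the single-namespace side of the comparison. For anyone wanting to build the multi-namespace side, a rough sketch of the nvme-cli steps follows — the sizes, namespace IDs, and controller ID are placeholders (check `nvme id-ctrl` for your drive's `nn` and `cntlid` values), and `delete-ns` destroys all data on the namespace:

```shell
# How many namespaces does the controller support? (nn field)
nvme id-ctrl /dev/nvme0 | grep -E '^nn'

# Remove the existing full-size namespace (DESTROYS DATA)
nvme delete-ns /dev/nvme0 -n 1

# Create and attach one of N equal namespaces; --nsze/--ncap are in
# formatted LBAs (512 B blocks for --flbas=0), so adjust for your capacity
nvme create-ns /dev/nvme0 --nsze=1610612736 --ncap=1610612736 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0

# Make the kernel pick up the new namespace
nvme ns-rescan /dev/nvme0
```

Repeat create/attach per namespace, then the devices show up as /dev/nvme0n1 … /dev/nvme0nN for the vdev layout in the first post.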
  pool: qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-6b9a100a-c7ea-c861-4875-16db1ba3acef        ONLINE       0     0     0
          mirror-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
          mirror-1                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0

errors: No known data errors

root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# zfs get all qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe
NAME                                          PROPERTY              VALUE                                                            SOURCE
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  type                  filesystem                                                       -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  creation              Fri Apr  5 19:48 2024                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  used                  104K                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  available             6.79T                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  referenced            104K                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  compressratio         1.00x                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  mounted               yes                                                              -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  quota                 none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  reservation           none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  recordsize            128K                                                             local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  mountpoint            /mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  sharenfs              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  checksum              on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  compression           on                                                               inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  atime                 off                                                              local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  devices               on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  exec                  on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  setuid                on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  readonly              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  zoned                 off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapdir               hidden                                                           default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  aclmode               discard                                                          default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  aclinherit            restricted                                                       default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  createtxg             40                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  canmount              on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  xattr                 sa                                                               local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  copies                1                                                                default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  version               5                                                                -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  utf8only              off                                                              -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  normalization         none                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  casesensitivity       sensitive                                                        -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  vscan                 off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  nbmand                off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  sharesmb              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  refquota              none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  refreservation        none                                                             local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  guid                  3760159109045432288                                              -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  primarycache          metadata                                                         local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  secondarycache        metadata                                                         local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbysnapshots       0B                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbydataset         104K                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbychildren        0B                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbyrefreservation  0B                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  logbias               latency                                                          default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  objsetid              68                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  dedup                 off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  mlslabel              none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  sync                  standard                                                         local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  dnodesize             legacy                                                           default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  refcompressratio      1.00x                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  written               104K                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  logicalused           42.5K                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  logicalreferenced     42.5K                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  volmode               default                                                          default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  filesystem_limit      none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapshot_limit        none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  filesystem_count      none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapshot_count        none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapdev               hidden                                                           default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  acltype               posix                                                            inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  context               none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  fscontext             none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  defcontext            none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  rootcontext           none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  relatime              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  redundant_metadata    all                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  overlay               on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  encryption            off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  keylocation           none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  keyformat             none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  pbkdf2iters           0                                                                default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  special_small_blocks  0                                                                default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  quantastor:shareid    c5771906-907a-7701-feb8-a26c8caf1adf                             local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  quantastor:name       R-10                                                             inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe#

root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
fiotest: Laying out IO file (1 file / 102400MiB)
Jobs: 8 (f=8): [m(8)][100.0%][r=3306MiB/s,w=3287MiB/s][r=3306,w=3287 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1724373: Fri Apr  5 19:52:36 2024
  read: IOPS=3260, BW=3260MiB/s (3419MB/s)(191GiB/60001msec)
    slat (usec): min=156, max=166753, avg=1829.05, stdev=6157.65
    clat (usec): min=3, max=188926, avg=8459.71, stdev=15143.91
     lat (usec): min=547, max=191273, avg=10291.01, stdev=17177.93
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    4], 40.00th=[    5], 50.00th=[    6], 60.00th=[    7],
     | 70.00th=[    8], 80.00th=[   10], 90.00th=[   12], 95.00th=[   16],
     | 99.00th=[  105], 99.50th=[  136], 99.90th=[  155], 99.95th=[  159],
     | 99.99th=[  171]
   bw (  MiB/s): min= 1862, max= 4725, per=99.14%, avg=3232.39, stdev=77.12, samples=957
   iops        : min= 1861, max= 4724, avg=3231.03, stdev=77.14, samples=957
  write: IOPS=3260, BW=3260MiB/s (3419MB/s)(191GiB/60001msec); 0 zone resets
    slat (usec): min=197, max=178283, avg=608.72, stdev=3486.54
    clat (usec): min=3, max=181325, avg=8721.92, stdev=15831.77
     lat (usec): min=483, max=182044, avg=9331.38, stdev=16312.58
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    4], 40.00th=[    5], 50.00th=[    6], 60.00th=[    7],
     | 70.00th=[    8], 80.00th=[   10], 90.00th=[   13], 95.00th=[   16],
     | 99.00th=[  112], 99.50th=[  138], 99.90th=[  155], 99.95th=[  159],
     | 99.99th=[  171]
   bw (  MiB/s): min= 1802, max= 4955, per=99.18%, avg=3233.63, stdev=79.89, samples=957
   iops        : min= 1801, max= 4954, avg=3232.19, stdev=79.93, samples=957
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=33.69%, 10=48.97%, 20=13.83%, 50=1.45%
  lat (msec)   : 100=0.91%, 250=1.14%
  cpu          : usr=2.92%, sys=26.44%, ctx=925342, majf=0, minf=100
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=195628,195620,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=3260MiB/s (3419MB/s), 3260MiB/s-3260MiB/s (3419MB/s-3419MB/s), io=191GiB (205GB), run=60001-60001msec
  WRITE: bw=3260MiB/s (3419MB/s), 3260MiB/s-3260MiB/s (3419MB/s-3419MB/s), io=191GiB (205GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=10.4GiB/s][r=10.6k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2010639: Fri Apr  5 19:54:13 2024
  read: IOPS=12.3k, BW=11.0GiB/s (12.9GB/s)(718GiB/60001msec)
    slat (usec): min=130, max=6113, avg=648.08, stdev=143.60
    clat (usec): min=3, max=11287, avg=4569.84, stdev=674.26
     lat (usec): min=613, max=12332, avg=5218.68, stdev=758.38
    clat percentiles (usec):
     |  1.00th=[ 3359],  5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3982],
     | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4555], 60.00th=[ 4686],
     | 70.00th=[ 4883], 80.00th=[ 5080], 90.00th=[ 5473], 95.00th=[ 5735],
     | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7963],
     | 99.99th=[ 8979]
   bw (  MiB/s): min= 9588, max=15806, per=100.00%, avg=12262.56, stdev=177.29, samples=956
   iops        : min= 9588, max=15806, avg=12262.34, stdev=177.29, samples=956
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=20.87%, 10=79.12%, 20=0.01%
  cpu          : usr=0.86%, sys=45.14%, ctx=1079783, majf=0, minf=16474
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=735579,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=11.0GiB/s (12.9GB/s), 11.0GiB/s-11.0GiB/s (12.9GB/s-12.9GB/s), io=718GiB (771GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=4366MiB/s][w=4366 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2769517: Fri Apr  5 19:56:14 2024
  write: IOPS=5549, BW=5549MiB/s (5819MB/s)(325GiB/60003msec); 0 zone resets
    slat (usec): min=189, max=427718, avg=1435.08, stdev=3882.16
    clat (usec): min=2, max=575698, avg=10094.13, stdev=13957.73
     lat (usec): min=1546, max=584740, avg=11530.02, stdev=15240.11
    clat percentiles (usec):
     |  1.00th=[  1811],  5.00th=[  2311], 10.00th=[  2769], 20.00th=[  5080],
     | 30.00th=[  6915], 40.00th=[  8717], 50.00th=[  9634], 60.00th=[ 10421],
     | 70.00th=[ 11207], 80.00th=[ 12125], 90.00th=[ 14091], 95.00th=[ 16581],
     | 99.00th=[ 34341], 99.50th=[ 68682], 99.90th=[221250], 99.95th=[291505],
     | 99.99th=[467665]
   bw (  MiB/s): min=  481, max=16464, per=99.20%, avg=5504.81, stdev=301.43, samples=957
   iops        : min=  479, max=16464, avg=5503.03, stdev=301.42, samples=957
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%
  lat (msec)   : 2=2.26%, 4=13.66%, 10=39.34%, 20=42.06%, 50=2.03%
  lat (msec)   : 100=0.30%, 250=0.28%, 500=0.07%, 750=0.01%
  cpu          : usr=4.03%, sys=21.17%, ctx=3005278, majf=0, minf=103
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,332985,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=5549MiB/s (5819MB/s), 5549MiB/s-5549MiB/s (5819MB/s-5819MB/s), io=325GiB (349GB), run=60003-60003msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=2376MiB/s,w=2358MiB/s][r=19.0k,w=18.9k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2914476: Fri Apr  5 19:57:35 2024
  read: IOPS=18.6k, BW=2321MiB/s (2433MB/s)(136GiB/60001msec)
    slat (usec): min=12, max=195024, avg=377.39, stdev=1628.89
    clat (usec): min=2, max=196405, avg=1498.75, stdev=3762.80
     lat (usec): min=197, max=196606, avg=1876.68, stdev=4330.41
    clat percentiles (usec):
     |  1.00th=[   392],  5.00th=[   570], 10.00th=[   627], 20.00th=[   791],
     | 30.00th=[   898], 40.00th=[   996], 50.00th=[  1106], 60.00th=[  1237],
     | 70.00th=[  1418], 80.00th=[  1778], 90.00th=[  2376], 95.00th=[  2900],
     | 99.00th=[  4883], 99.50th=[  8225], 99.90th=[ 53216], 99.95th=[ 86508],
     | 99.99th=[164627]
   bw (  MiB/s): min= 1396, max= 2941, per=99.03%, avg=2298.26, stdev=36.76, samples=952
   iops        : min=11169, max=23534, avg=18384.94, stdev=294.09, samples=952
  write: IOPS=18.6k, BW=2323MiB/s (2436MB/s)(136GiB/60001msec); 0 zone resets
    slat (usec): min=19, max=171250, avg=46.26, stdev=611.68
    clat (usec): min=2, max=196283, avg=1519.49, stdev=3932.32
     lat (usec): min=27, max=196324, avg=1565.95, stdev=3988.50
    clat percentiles (usec):
     |  1.00th=[   400],  5.00th=[   578], 10.00th=[   627], 20.00th=[   791],
     | 30.00th=[   898], 40.00th=[   996], 50.00th=[  1106], 60.00th=[  1237],
     | 70.00th=[  1434], 80.00th=[  1795], 90.00th=[  2409], 95.00th=[  2933],
     | 99.00th=[  4948], 99.50th=[  8848], 99.90th=[ 56361], 99.95th=[ 91751],
     | 99.99th=[164627]
   bw (  MiB/s): min= 1350, max= 2978, per=99.03%, avg=2300.48, stdev=38.17, samples=952
   iops        : min=10800, max=23830, avg=18402.66, stdev=305.33, samples=952
  lat (usec)   : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=0.44%
  lat (usec)   : 500=3.36%, 750=11.71%, 1000=25.06%
  lat (msec)   : 2=43.57%, 4=14.22%, 10=1.19%, 20=0.19%, 50=0.15%
  lat (msec)   : 100=0.07%, 250=0.04%
  cpu          : usr=2.29%, sys=17.75%, ctx=1717283, majf=0, minf=104
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1113958,1115077,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=2321MiB/s (2433MB/s), 2321MiB/s-2321MiB/s (2433MB/s-2433MB/s), io=136GiB (146GB), run=60001-60001msec
  WRITE: bw=2323MiB/s (2436MB/s), 2323MiB/s-2323MiB/s (2436MB/s-2436MB/s), io=136GiB (146GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4484MiB/s][r=35.9k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3145541: Fri Apr  5 19:58:51 2024
  read: IOPS=36.7k, BW=4582MiB/s (4805MB/s)(268GiB/60002msec)
    slat (usec): min=12, max=10286, avg=216.50, stdev=26.82
    clat (usec): min=2, max=11610, avg=1528.57, stdev=72.97
     lat (usec): min=208, max=11834, avg=1745.30, stdev=78.31
    clat percentiles (usec):
     |  1.00th=[ 1418],  5.00th=[ 1467], 10.00th=[ 1483], 20.00th=[ 1500],
     | 30.00th=[ 1516], 40.00th=[ 1516], 50.00th=[ 1532], 60.00th=[ 1532],
     | 70.00th=[ 1549], 80.00th=[ 1565], 90.00th=[ 1582], 95.00th=[ 1598],
     | 99.00th=[ 1647], 99.50th=[ 1663], 99.90th=[ 1860], 99.95th=[ 2147],
     | 99.99th=[ 2966]
   bw (  MiB/s): min= 4452, max= 4694, per=100.00%, avg=4582.68, stdev= 3.70, samples=953
   iops        : min=35616, max=37552, avg=36661.34, stdev=29.59, samples=953
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.93%, 4=0.06%, 10=0.01%, 20=0.01%
  cpu          : usr=0.99%, sys=15.15%, ctx=2201763, majf=0, minf=2131
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2199504,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4582MiB/s (4805MB/s), 4582MiB/s-4582MiB/s (4805MB/s-4805MB/s), io=268GiB (288GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=4935MiB/s][w=39.5k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3146158: Fri Apr  5 20:00:14 2024
  write: IOPS=49.7k, BW=6218MiB/s (6520MB/s)(364GiB/60001msec); 0 zone resets
    slat (usec): min=14, max=415402, avg=157.58, stdev=1118.80
    clat (usec): min=2, max=477365, avg=1127.46, stdev=3441.88
     lat (usec): min=150, max=477513, avg=1285.54, stdev=3730.63
    clat percentiles (usec):
     |  1.00th=[   194],  5.00th=[   219], 10.00th=[   281], 20.00th=[   594],
     | 30.00th=[   824], 40.00th=[   979], 50.00th=[  1074], 60.00th=[  1139],
     | 70.00th=[  1221], 80.00th=[  1319], 90.00th=[  1516], 95.00th=[  1713],
     | 99.00th=[  2474], 99.50th=[  4178], 99.90th=[ 30278], 99.95th=[ 60031],
     | 99.99th=[158335]
   bw (  MiB/s): min=  933, max=18359, per=99.38%, avg=6179.81, stdev=296.48, samples=952
   iops        : min= 7464, max=146870, avg=49436.64, stdev=2371.84, samples=952
  lat (usec)   : 4=0.01%, 10=0.01%, 250=8.27%, 500=8.70%, 750=9.20%
  lat (usec)   : 1000=15.44%
  lat (msec)   : 2=56.28%, 4=1.59%, 10=0.23%, 20=0.12%, 50=0.10%
  lat (msec)   : 100=0.04%, 250=0.02%, 500=0.01%
  cpu          : usr=2.96%, sys=20.17%, ctx=3376594, majf=0, minf=96
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2984827,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=6218MiB/s (6520MB/s), 6218MiB/s-6218MiB/s (6520MB/s-6520MB/s), io=364GiB (391GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=40.1MiB/s,w=41.6MiB/s][r=10.3k,w=10.7k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3348132: Fri Apr  5 20:01:26 2024
  read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(2376MiB/60001msec)
    slat (usec): min=3, max=184682, avg=271.82, stdev=1208.81
    clat (usec): min=2, max=190902, avg=2756.24, stdev=4800.84
     lat (usec): min=214, max=191120, avg=3028.46, stdev=5060.12
    clat percentiles (usec):
     |  1.00th=[  1532],  5.00th=[  1696], 10.00th=[  1795], 20.00th=[  1926],
     | 30.00th=[  2024], 40.00th=[  2114], 50.00th=[  2212], 60.00th=[  2311],
     | 70.00th=[  2442], 80.00th=[  2638], 90.00th=[  4015], 95.00th=[  5145],
     | 99.00th=[  8586], 99.50th=[ 11994], 99.90th=[ 79168], 99.95th=[132645],
     | 99.99th=[177210]
   bw (  KiB/s): min=31432, max=48407, per=99.30%, avg=40271.06, stdev=427.90, samples=952
   iops        : min= 7858, max=12101, avg=10067.25, stdev=106.97, samples=952
  write: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(2376MiB/60001msec); 0 zone resets
    slat (usec): min=5, max=186604, avg=509.55, stdev=1815.44
    clat (usec): min=2, max=190668, avg=2770.08, stdev=4916.91
     lat (usec): min=332, max=191295, avg=3280.11, stdev=5427.39
    clat percentiles (usec):
     |  1.00th=[  1532],  5.00th=[  1696], 10.00th=[  1795], 20.00th=[  1926],
     | 30.00th=[  2024], 40.00th=[  2114], 50.00th=[  2212], 60.00th=[  2311],
     | 70.00th=[  2442], 80.00th=[  2671], 90.00th=[  4015], 95.00th=[  5145],
     | 99.00th=[  8717], 99.50th=[ 12125], 99.90th=[ 81265], 99.95th=[133694],
     | 99.99th=[177210]
   bw (  KiB/s): min=31936, max=48202, per=99.29%, avg=40268.85, stdev=397.09, samples=952
   iops        : min= 7984, max=12049, avg=10066.67, stdev=99.26, samples=952
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=27.67%, 4=62.23%, 10=9.38%, 20=0.42%, 50=0.15%
  lat (msec)   : 100=0.07%, 250=0.07%
  cpu          : usr=0.90%, sys=12.18%, ctx=2105106, majf=0, minf=112
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=608322,608363,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=2376MiB (2492MB), run=60001-60001msec
  WRITE: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=2376MiB (2492MB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=196MiB/s][r=50.3k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3779115: Fri Apr  5 20:02:43 2024
  read: IOPS=52.1k, BW=204MiB/s (213MB/s)(11.9GiB/60001msec)
    slat (usec): min=2, max=19385, avg=151.57, stdev=42.59
    clat (usec): min=2, max=20304, avg=1075.52, stdev=168.25
     lat (usec): min=25, max=20439, avg=1227.30, stdev=185.82
    clat percentiles (usec):
     |  1.00th=[  611],  5.00th=[  906], 10.00th=[  938], 20.00th=[  971],
     | 30.00th=[  996], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090],
     | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1270], 95.00th=[ 1352],
     | 99.00th=[ 1483], 99.50th=[ 1532], 99.90th=[ 1745], 99.95th=[ 2114],
     | 99.99th=[ 2933]
   bw (  KiB/s): min=178000, max=266688, per=100.00%, avg=208501.89, stdev=1678.30, samples=952
   iops        : min=44500, max=66672, avg=52125.33, stdev=419.58, samples=952
  lat (usec)   : 4=0.01%, 50=0.04%, 100=0.01%, 250=0.02%, 500=0.26%
  lat (usec)   : 750=2.06%, 1000=29.28%
  lat (msec)   : 2=68.29%, 4=0.05%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=1.57%, sys=16.68%, ctx=3166488, majf=0, minf=167
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3127252,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=204MiB/s (213MB/s), 204MiB/s-204MiB/s (213MB/s-213MB/s), io=11.9GiB (12.8GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1097MiB/s][w=281k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=249896: Fri Apr  5 20:03:58 2024
  write: IOPS=278k, BW=1087MiB/s (1139MB/s)(63.7GiB/60002msec); 0 zone resets
    slat (usec): min=3, max=29920, avg=27.46, stdev=135.85
    clat (nsec): min=1937, max=34136k, avg=202184.90, stdev=362113.81
     lat (usec): min=11, max=34147, avg=229.76, stdev=386.47
    clat percentiles (usec):
     |  1.00th=[   57],  5.00th=[   80], 10.00th=[   94], 20.00th=[  111],
     | 30.00th=[  123], 40.00th=[  135], 50.00th=[  153], 60.00th=[  172],
     | 70.00th=[  194], 80.00th=[  235], 90.00th=[  420], 95.00th=[  482],
     | 99.00th=[  553], 99.50th=[  594], 99.90th=[ 3687], 99.95th=[ 7177],
     | 99.99th=[21365]
   bw (  MiB/s): min= 1037, max= 1134, per=99.94%, avg=1085.92, stdev= 2.71, samples=953
   iops        : min=265628, max=290554, avg=277994.78, stdev=693.13, samples=953
  lat (usec)   : 2=0.01%, 4=0.01%, 20=0.01%, 50=0.56%, 100=12.41%
  lat (usec)   : 250=68.60%, 500=15.03%, 750=3.18%, 1000=0.06%
  lat (msec)   : 2=0.05%, 4=0.01%, 10=0.07%, 20=0.01%, 50=0.01%
  cpu          : usr=5.43%, sys=56.78%, ctx=7545337, majf=0, minf=87
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,16689512,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1087MiB/s (1139MB/s), 1087MiB/s-1087MiB/s (1139MB/s-1139MB/s), io=63.7GiB (68.4GB), run=60002-60002msec
# Kioxia CM6 RaidZ1, sync=standard (single-namespace pool)

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         844.35  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         836.78  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1         851.14  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         842.81  GB /   3.84  TB    512   B +  0 B   2.2.0

  pool: qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5        ONLINE       0     0     0
          raidz1-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0

errors: No known data errors

root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# zfs get all qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe
NAME                                          PROPERTY              VALUE                                                            SOURCE
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  type                  filesystem                                                       -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  creation              Fri Apr  5 17:19 2024                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  used                  97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  available             9.86T                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  referenced            97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  compressratio         1.02x                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  mounted               yes                                                              -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  quota                 none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  reservation           none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  recordsize            128K                                                             local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  mountpoint            /mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  sharenfs              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  checksum              on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  compression           on                                                               inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  atime                 on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  devices               on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  exec                  on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  setuid                on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  readonly              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  zoned                 off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapdir               hidden                                                           default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  aclmode               discard                                                          default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  aclinherit            restricted                                                       default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  createtxg             35                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  canmount              on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  xattr                 sa                                                               local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  copies                1                                                                default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  version               5                                                                -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  utf8only              off                                                              -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  normalization         none                                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  casesensitivity       sensitive                                                        -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  vscan                 off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  nbmand                off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  sharesmb              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  refquota              none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  refreservation        none                                                             local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  guid                  15109964299467543167                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  primarycache          metadata                                                         local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  secondarycache        metadata                                                         local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbysnapshots       0B                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbydataset         97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbychildren        0B                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbyrefreservation  0B                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  logbias               latency                                                          default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  objsetid              164                                                              -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  dedup                 off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  mlslabel              none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  sync                  standard                                                         local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  dnodesize             legacy                                                           default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  refcompressratio      1.02x                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  written               97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  logicalused           100G                                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  logicalreferenced     100G                                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  volmode               default                                                          default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  filesystem_limit      none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapshot_limit        none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  filesystem_count      none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapshot_count        none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapdev               hidden                                                           default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  acltype               posix                                                            inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  context               none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  fscontext             none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  defcontext            none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  rootcontext           none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  relatime              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  redundant_metadata    all                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  overlay               on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  encryption            off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  keylocation           none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  keyformat             none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  pbkdf2iters           0                                                                default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  special_small_blocks  0                                                                default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  quantastor:shareid    515c12c1-bca4-cd8c-2e3c-89dcd5b6efea                             local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  quantastor:name       NVMe-Test-No-NameSpaces                                          inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5

root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=0): [f(8)][100.0%][r=2892MiB/s,w=2823MiB/s][r=2891,w=2823 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=117908: Fri Apr  5 19:14:13 2024
  read: IOPS=2938, BW=2938MiB/s (3081MB/s)(172GiB/60017msec)
    slat (usec): min=163, max=235340, avg=1964.75, stdev=7331.79
    clat (usec): min=4, max=257433, avg=9398.25, stdev=19217.77
     lat (usec): min=843, max=258635, avg=11364.66, stdev=21521.55
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    5], 40.00th=[    5], 50.00th=[    6], 60.00th=[    6],
     | 70.00th=[    7], 80.00th=[    9], 90.00th=[   13], 95.00th=[   23],
     | 99.00th=[  126], 99.50th=[  167], 99.90th=[  194], 99.95th=[  203],
     | 99.99th=[  228]
   bw (  MiB/s): min= 1201, max= 4910, per=99.25%, avg=2916.23, stdev=104.06, samples=953
   iops        : min= 1200, max= 4909, avg=2914.71, stdev=104.08, samples=953
  write: IOPS=2939, BW=2940MiB/s (3083MB/s)(172GiB/60017msec); 0 zone resets
    slat (usec): min=175, max=196457, avg=736.46, stdev=4461.86
    clat (usec): min=3, max=258332, avg=9658.87, stdev=19970.13
     lat (usec): min=389, max=258669, avg=10396.18, stdev=20763.87
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    5], 40.00th=[    5], 50.00th=[    6], 60.00th=[    6],
     | 70.00th=[    7], 80.00th=[    9], 90.00th=[   13], 95.00th=[   23],
     | 99.00th=[  136], 99.50th=[  169], 99.90th=[  197], 99.95th=[  207],
     | 99.99th=[  239]
   bw (  MiB/s): min= 1126, max= 4991, per=99.30%, avg=2919.06, stdev=106.26, samples=953
   iops        : min= 1123, max= 4990, avg=2917.49, stdev=106.27, samples=953
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=22.26%, 10=63.89%, 20=8.22%, 50=3.00%
  lat (msec)   : 100=1.17%, 250=1.44%, 500=0.01%
  cpu          : usr=2.39%, sys=26.15%, ctx=1439444, majf=0, minf=99
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=176348,176433,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=2938MiB/s (3081MB/s), 2938MiB/s-2938MiB/s (3081MB/s-3081MB/s), io=172GiB (185GB), run=60017-60017msec
  WRITE: bw=2940MiB/s (3083MB/s), 2940MiB/s-2940MiB/s (3083MB/s-3083MB/s), io=172GiB (185GB), run=60017-60017msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=8857MiB/s][r=8857 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=356931: Fri Apr  5 19:15:22 2024
  read: IOPS=9653, BW=9654MiB/s (10.1GB/s)(566GiB/60001msec)
    slat (usec): min=127, max=12644, avg=824.04, stdev=199.36
    clat (usec): min=3, max=22274, avg=5802.72, stdev=1067.96
     lat (usec): min=462, max=23298, avg=6627.60, stdev=1204.00
    clat percentiles (usec):
     |  1.00th=[ 3326],  5.00th=[ 3818], 10.00th=[ 4293], 20.00th=[ 4948],
     | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[ 5866], 60.00th=[ 6128],
     | 70.00th=[ 6390], 80.00th=[ 6652], 90.00th=[ 7111], 95.00th=[ 7373],
     | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[10028], 99.95th=[11207],
     | 99.99th=[13829]
   bw (  MiB/s): min= 7616, max=16288, per=99.99%, avg=9652.37, stdev=193.58, samples=960
   iops        : min= 7616, max=16288, avg=9652.27, stdev=193.59, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=6.84%, 10=93.06%, 20=0.10%, 50=0.01%
  cpu          : usr=0.71%, sys=48.11%, ctx=962165, majf=0, minf=16483
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=579230,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=9654MiB/s (10.1GB/s), 9654MiB/s-9654MiB/s (10.1GB/s-10.1GB/s), io=566GiB (607GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=6310MiB/s][w=6309 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1338075: Fri Apr  5 19:16:39 2024
  write: IOPS=6290, BW=6290MiB/s (6596MB/s)(369GiB/60108msec); 0 zone resets
    slat (usec): min=179, max=586852, avg=1263.41, stdev=5867.56
    clat (usec): min=3, max=721602, avg=8894.64, stdev=20323.85
     lat (usec): min=716, max=940720, avg=10159.29, stdev=22121.52
    clat percentiles (usec):
     |  1.00th=[  1762],  5.00th=[  2073], 10.00th=[  2343], 20.00th=[  2933],
     | 30.00th=[  3687], 40.00th=[  5080], 50.00th=[  6521], 60.00th=[  8029],
     | 70.00th=[  9503], 80.00th=[ 10945], 90.00th=[ 13435], 95.00th=[ 16909],
     | 99.00th=[ 45351], 99.50th=[109577], 99.90th=[346031], 99.95th=[442500],
     | 99.99th=[574620]
   bw (  MiB/s): min=  124, max=18148, per=98.36%, avg=6187.21, stdev=406.32, samples=959
   iops        : min=  120, max=18147, avg=6184.76, stdev=406.34, samples=959
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=3.72%, 4=28.94%, 10=41.15%, 20=22.88%, 50=2.37%
  lat (msec)   : 100=0.38%, 250=0.34%, 500=0.18%, 750=0.02%
  cpu          : usr=4.29%, sys=23.36%, ctx=2821045, majf=0, minf=93
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,378093,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=6290MiB/s (6596MB/s), 6290MiB/s-6290MiB/s (6596MB/s-6596MB/s), io=369GiB (396GB), run=60108-60108msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe#
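The run group above (and the 128K and 4K groups below) all use the same fio invocation, varying only `--rw` and `--bs`. For anyone wanting to repeat the A/B on their own hardware, here is a small POSIX-sh sketch of that sweep as a dry run: it just prints the nine commands (3 block sizes x 3 access patterns), so you can review them before piping to `sh` or dropping the leading `echo`. `TARGET` is my placeholder for the dataset mountpoint, not something from this thread.

```shell
# Dry-run sweep of the fio matrix used in these posts.
# TARGET is an assumed variable -- point it at the dataset under test.
TARGET=${TARGET:-.}

fio_sweep() {
    for bs in 1M 128K 4K; do
        for rw in randrw read write; do
            # echo instead of exec: prints the command line only
            echo fio --name=fiotest --filename="$TARGET/fio.test" \
                --size=100Gb --rw="$rw" --bs="$bs" --direct=1 --numjobs=8 \
                --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
        done
    done
}

fio_sweep
```

Running the same parameter matrix on both the single-namespace and the 5-namespace pool keeps the comparison apples-to-apples.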


--------------------------------------------------------------------------------------------------------------


root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=0): [f(8)][100.0%][r=2023MiB/s,w=2039MiB/s][r=16.2k,w=16.3k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1430277: Fri Apr  5 19:18:37 2024
  read: IOPS=17.6k, BW=2197MiB/s (2304MB/s)(129GiB/60001msec)
    slat (usec): min=13, max=191171, avg=395.01, stdev=2052.38
    clat (usec): min=2, max=225216, avg=1586.90, stdev=5009.18
     lat (usec): min=215, max=225700, avg=1982.66, stdev=5770.67
    clat percentiles (usec):
     |  1.00th=[   383],  5.00th=[   553], 10.00th=[   627], 20.00th=[   775],
     | 30.00th=[   898], 40.00th=[   988], 50.00th=[  1106], 60.00th=[  1237],
     | 70.00th=[  1418], 80.00th=[  1713], 90.00th=[  2212], 95.00th=[  2671],
     | 99.00th=[  7635], 99.50th=[ 15270], 99.90th=[ 79168], 99.95th=[125305],
     | 99.99th=[183501]
   bw (  MiB/s): min= 1028, max= 2921, per=99.00%, avg=2175.23, stdev=48.16, samples=952
   iops        : min= 8221, max=23370, avg=17400.34, stdev=385.27, samples=952
  write: IOPS=17.6k, BW=2199MiB/s (2306MB/s)(129GiB/60001msec); 0 zone resets
    slat (usec): min=18, max=180440, avg=51.58, stdev=633.01
    clat (usec): min=9, max=225418, avg=1601.59, stdev=5028.93
     lat (usec): min=39, max=225451, avg=1653.57, stdev=5097.86
    clat percentiles (usec):
     |  1.00th=[   388],  5.00th=[   562], 10.00th=[   627], 20.00th=[   783],
     | 30.00th=[   898], 40.00th=[   996], 50.00th=[  1106], 60.00th=[  1237],
     | 70.00th=[  1434], 80.00th=[  1729], 90.00th=[  2245], 95.00th=[  2737],
     | 99.00th=[  8094], 99.50th=[ 15401], 99.90th=[ 78119], 99.95th=[126354],
     | 99.99th=[183501]
   bw (  MiB/s): min= 1007, max= 2957, per=99.00%, avg=2176.91, stdev=48.64, samples=952
   iops        : min= 8061, max=23658, avg=17413.94, stdev=389.15, samples=952
  lat (usec)   : 4=0.01%, 10=0.01%, 50=0.01%, 100=0.01%, 250=0.40%
  lat (usec)   : 500=3.23%, 750=13.28%, 1000=23.83%
  lat (msec)   : 2=45.61%, 4=11.85%, 10=0.98%, 20=0.44%, 50=0.21%
  lat (msec)   : 100=0.11%, 250=0.07%
  cpu          : usr=2.12%, sys=18.56%, ctx=2007845, majf=0, minf=106
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1054652,1055486,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=2197MiB/s (2304MB/s), 2197MiB/s-2197MiB/s (2304MB/s-2304MB/s), io=129GiB (138GB), run=60001-60001msec
  WRITE: bw=2199MiB/s (2306MB/s), 2199MiB/s-2199MiB/s (2306MB/s-2306MB/s), io=129GiB (138GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4850MiB/s][r=38.8k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1841840: Fri Apr  5 19:19:52 2024
  read: IOPS=38.8k, BW=4848MiB/s (5084MB/s)(284GiB/60002msec)
    slat (usec): min=11, max=4189, avg=204.52, stdev=23.77
    clat (usec): min=2, max=5437, avg=1444.78, stdev=65.20
     lat (usec): min=184, max=5637, avg=1649.53, stdev=70.19
    clat percentiles (usec):
     |  1.00th=[ 1352],  5.00th=[ 1385], 10.00th=[ 1401], 20.00th=[ 1418],
     | 30.00th=[ 1418], 40.00th=[ 1434], 50.00th=[ 1434], 60.00th=[ 1450],
     | 70.00th=[ 1450], 80.00th=[ 1467], 90.00th=[ 1483], 95.00th=[ 1516],
     | 99.00th=[ 1614], 99.50th=[ 1663], 99.90th=[ 2180], 99.95th=[ 2606],
     | 99.99th=[ 3294]
   bw (  MiB/s): min= 4747, max= 4988, per=100.00%, avg=4847.97, stdev= 4.26, samples=953
   iops        : min=37982, max=39904, avg=38783.68, stdev=34.09, samples=953
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02%
  lat (msec)   : 2=99.85%, 4=0.13%, 10=0.01%
  cpu          : usr=1.04%, sys=16.59%, ctx=2335195, majf=0, minf=2142
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2327176,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4848MiB/s (5084MB/s), 4848MiB/s-4848MiB/s (5084MB/s-5084MB/s), io=284GiB (305GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=4772MiB/s][w=38.2k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1866055: Fri Apr  5 19:21:05 2024
  write: IOPS=44.6k, BW=5581MiB/s (5852MB/s)(327GiB/60019msec); 0 zone resets
    slat (usec): min=15, max=250486, avg=175.46, stdev=1104.00
    clat (usec): min=2, max=434993, avg=1255.99, stdev=3904.36
     lat (usec): min=47, max=435203, avg=1431.93, stdev=4303.05
    clat percentiles (usec):
     |  1.00th=[   192],  5.00th=[   221], 10.00th=[   277], 20.00th=[   529],
     | 30.00th=[   807], 40.00th=[  1012], 50.00th=[  1106], 60.00th=[  1205],
     | 70.00th=[  1303], 80.00th=[  1450], 90.00th=[  1713], 95.00th=[  2024],
     | 99.00th=[  3326], 99.50th=[  6259], 99.90th=[ 55837], 99.95th=[ 86508],
     | 99.99th=[162530]
   bw (  MiB/s): min=  375, max=18644, per=99.22%, avg=5537.70, stdev=346.37, samples=955
   iops        : min= 3003, max=149155, avg=44299.76, stdev=2770.94, samples=955
  lat (usec)   : 4=0.01%, 10=0.01%, 20=0.01%, 100=0.01%, 250=8.30%
  lat (usec)   : 500=11.04%, 750=8.44%, 1000=11.57%
  lat (msec)   : 2=55.45%, 4=4.48%, 10=0.34%, 20=0.11%, 50=0.16%
  lat (msec)   : 100=0.08%, 250=0.04%, 500=0.01%
  cpu          : usr=2.71%, sys=18.52%, ctx=3403340, majf=0, minf=100
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2679806,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=5581MiB/s (5852MB/s), 5581MiB/s-5581MiB/s (5852MB/s-5852MB/s), io=327GiB (351GB), run=60019-60019msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe#

------------------------------

root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=36.3MiB/s,w=36.6MiB/s][r=9290,w=9376 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2096720: Fri Apr  5 19:22:48 2024
  read: IOPS=9877, BW=38.6MiB/s (40.5MB/s)(2315MiB/60001msec)
    slat (usec): min=3, max=179927, avg=277.10, stdev=1152.96
    clat (usec): min=2, max=214736, avg=2826.93, stdev=5549.25
     lat (usec): min=195, max=214918, avg=3104.44, stdev=5903.71
    clat percentiles (usec):
     |  1.00th=[  1500],  5.00th=[  1663], 10.00th=[  1762], 20.00th=[  1909],
     | 30.00th=[  2008], 40.00th=[  2114], 50.00th=[  2180], 60.00th=[  2311],
     | 70.00th=[  2442], 80.00th=[  2704], 90.00th=[  3818], 95.00th=[  4817],
     | 99.00th=[  9896], 99.50th=[ 18220], 99.90th=[100140], 99.95th=[133694],
     | 99.99th=[187696]
   bw (  KiB/s): min=29496, max=49332, per=99.55%, avg=39334.38, stdev=539.28, samples=959
   iops        : min= 7374, max=12332, avg=9832.97, stdev=134.81, samples=959
  write: IOPS=9871, BW=38.6MiB/s (40.4MB/s)(2314MiB/60001msec); 0 zone resets
    slat (usec): min=6, max=192301, avg=524.56, stdev=1783.98
    clat (usec): min=2, max=214869, avg=2847.52, stdev=5809.35
     lat (usec): min=363, max=215478, avg=3372.68, stdev=6452.07
    clat percentiles (usec):
     |  1.00th=[  1500],  5.00th=[  1680], 10.00th=[  1778], 20.00th=[  1909],
     | 30.00th=[  2008], 40.00th=[  2114], 50.00th=[  2212], 60.00th=[  2311],
     | 70.00th=[  2442], 80.00th=[  2704], 90.00th=[  3818], 95.00th=[  4817],
     | 99.00th=[ 10159], 99.50th=[ 18744], 99.90th=[104334], 99.95th=[145753],
     | 99.99th=[189793]
   bw (  KiB/s): min=30072, max=48888, per=99.56%, avg=39310.82, stdev=513.14, samples=959
   iops        : min= 7518, max=12222, avg=9827.09, stdev=128.27, samples=959
  lat (usec)   : 4=0.01%, 10=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=29.47%, 4=61.53%, 10=7.99%, 20=0.54%, 50=0.22%
  lat (msec)   : 100=0.14%, 250=0.10%
  cpu          : usr=0.97%, sys=14.17%, ctx=2299868, majf=0, minf=106
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=592676,592305,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=2315MiB (2428MB), run=60001-60001msec
  WRITE: bw=38.6MiB/s (40.4MB/s), 38.6MiB/s-38.6MiB/s (40.4MB/s-40.4MB/s), io=2314MiB (2426MB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=184MiB/s][r=47.0k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2901788: Fri Apr  5 19:24:05 2024
  read: IOPS=47.5k, BW=185MiB/s (194MB/s)(10.9GiB/60001msec)
    slat (usec): min=2, max=12910, avg=166.33, stdev=46.73
    clat (usec): min=2, max=14061, avg=1180.57, stdev=222.87
     lat (usec): min=25, max=14233, avg=1347.17, stdev=249.72
    clat percentiles (usec):
     |  1.00th=[  685],  5.00th=[  865], 10.00th=[  889], 20.00th=[  955],
     | 30.00th=[ 1045], 40.00th=[ 1172], 50.00th=[ 1237], 60.00th=[ 1270],
     | 70.00th=[ 1303], 80.00th=[ 1352], 90.00th=[ 1401], 95.00th=[ 1467],
     | 99.00th=[ 1680], 99.50th=[ 1778], 99.90th=[ 2409], 99.95th=[ 2704],
     | 99.99th=[ 3523]
   bw (  KiB/s): min=166454, max=229487, per=100.00%, avg=189940.76, stdev=1335.65, samples=952
   iops        : min=41613, max=57370, avg=47484.92, stdev=333.90, samples=952
  lat (usec)   : 4=0.01%, 50=0.11%, 100=0.01%, 250=0.05%, 500=0.24%
  lat (usec)   : 750=0.96%, 1000=24.59%
  lat (msec)   : 2=73.84%, 4=0.20%, 10=0.01%, 20=0.01%
  cpu          : usr=1.61%, sys=19.29%, ctx=2967938, majf=0, minf=144
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2848971,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=10.9GiB (11.7GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1147MiB/s][w=294k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=24176: Fri Apr  5 19:25:17 2024
  write: IOPS=292k, BW=1139MiB/s (1195MB/s)(66.8GiB/60022msec); 0 zone resets
    slat (usec): min=3, max=35037, avg=26.14, stdev=138.03
    clat (usec): min=2, max=35173, avg=192.84, stdev=370.77
     lat (usec): min=39, max=35198, avg=219.10, stdev=396.00
    clat percentiles (usec):
     |  1.00th=[   60],  5.00th=[   79], 10.00th=[   92], 20.00th=[  109],
     | 30.00th=[  120], 40.00th=[  133], 50.00th=[  149], 60.00th=[  167],
     | 70.00th=[  188], 80.00th=[  225], 90.00th=[  379], 95.00th=[  441],
     | 99.00th=[  515], 99.50th=[  562], 99.90th=[ 3916], 99.95th=[ 7242],
     | 99.99th=[21103]
   bw (  MiB/s): min= 1085, max= 1193, per=99.99%, avg=1139.14, stdev= 3.03, samples=960
   iops        : min=277826, max=305650, avg=291618.92, stdev=776.14, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%, 50=0.40%, 100=13.96%, 250=68.86%
  lat (usec)   : 500=15.40%, 750=1.15%, 1000=0.07%
  lat (msec)   : 2=0.05%, 4=0.01%, 10=0.07%, 20=0.02%, 50=0.01%
  cpu          : usr=5.82%, sys=58.41%, ctx=7708115, majf=0, minf=100
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,17505272,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1139MiB/s (1195MB/s), 1139MiB/s-1139MiB/s (1195MB/s-1195MB/s), io=66.8GiB (71.7GB), run=60022-60022msec
# Kioxia CM6, 4 namespaces per drive, RAID-Z1 vdevs, sync=always

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n2     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n3     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n4     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n2     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n3     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n4     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n2     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n3     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n4     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n2     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n3     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n4     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
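For anyone wanting to reproduce the layout above: a 3.84 TB CM6 can be carved into four ~960 GB namespaces with nvme-cli. A hedged sketch (device name, block count, and controller ID 0 are illustrative — check your drive's unallocated capacity with `nvme id-ctrl` before sizing, and note `create-ns` sizes are in formatted-LBA units, here 512 B):

```shell
# Split /dev/nvme0 into four equal namespaces and attach them.
# 1874003968 blocks x 512 B = ~959.49 GB, matching the nvme list output.
for i in 1 2 3 4; do
    nvme create-ns /dev/nvme0 --nsze=1874003968 --ncap=1874003968 --flbas=0
    nvme attach-ns /dev/nvme0 --namespace-id=$i --controllers=0
done
nvme list   # new namespaces should show up as /dev/nvme0n1 .. /dev/nvme0n4
```

The loop assumes the drive starts with no namespaces, so the controller assigns IDs 1-4 in order; on a drive with an existing namespace you would delete it first (`nvme delete-ns`).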

root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# zpool status -L
  pool: qs-1612333a-0e2f-ba76-d799-5b43a58e643b
 state: ONLINE
config:

        NAME                                     STATE     READ WRITE CKSUM
        qs-1612333a-0e2f-ba76-d799-5b43a58e643b  ONLINE       0     0     0
          raidz1-0                               ONLINE       0     0     0
            nvme0n1                              ONLINE       0     0     0
            nvme1n1                              ONLINE       0     0     0
            nvme2n1                              ONLINE       0     0     0
            nvme3n1                              ONLINE       0     0     0
          raidz1-1                               ONLINE       0     0     0
            nvme3n2                              ONLINE       0     0     0
            nvme1n2                              ONLINE       0     0     0
            nvme0n2                              ONLINE       0     0     0
            nvme2n2                              ONLINE       0     0     0
          raidz1-2                               ONLINE       0     0     0
            nvme2n3                              ONLINE       0     0     0
            nvme0n3                              ONLINE       0     0     0
            nvme1n3                              ONLINE       0     0     0
            nvme3n3                              ONLINE       0     0     0
          raidz1-3                               ONLINE       0     0     0
            nvme3n4                              ONLINE       0     0     0
            nvme0n4                              ONLINE       0     0     0
            nvme2n4                              ONLINE       0     0     0
            nvme1n4                              ONLINE       0     0     0

errors: No known data errors
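The pool above could be recreated in one command — roughly like this (pool name shortened to `tank` for illustration; the QuantaStor GUID name and any ashift/property choices are site-specific). Each 4-wide raidz1 vdev takes exactly one namespace from every physical drive, so a single drive failure only degrades each vdev by one member instead of killing any vdev outright:

```shell
# One raidz1 vdev per namespace index, striped across all four drives.
zpool create -o ashift=12 tank \
    raidz1 nvme0n1 nvme1n1 nvme2n1 nvme3n1 \
    raidz1 nvme0n2 nvme1n2 nvme2n2 nvme3n2 \
    raidz1 nvme0n3 nvme1n3 nvme2n3 nvme3n3 \
    raidz1 nvme0n4 nvme1n4 nvme2n4 nvme3n4
```

For production you would use the stable `/dev/disk/by-id/nvme-eui.*` paths (as the `zpool status -v` output shows ZFS resolving them) rather than the bare `nvmeXnY` names, which can renumber across reboots.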

root@quantastor:/# zpool status -v
  pool: qs-1612333a-0e2f-ba76-d799-5b43a58e643b
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-1612333a-0e2f-ba76-d799-5b43a58e643b        ONLINE       0     0     0
          raidz1-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0
          raidz1-1                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a202  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6802  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d202  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b802  ONLINE       0     0     0
          raidz1-2                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d203  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a203  ONLINE       0     0     0
          raidz1-3                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a204  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d204  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b804  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6804  ONLINE       0     0     0

errors: No known data errors

root@quantastor:/# zpool iostat -v
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
qs-1612333a-0e2f-ba76-d799-5b43a58e643b        5.32M  13.9T      0     21  1.03K   228K
  raidz1-0                                     1.52M  3.48T      0     11    262  90.4K
    nvme-eui.00000000000000008ce38ee20af6d201      -      -      0      2     65  22.8K
    nvme-eui.00000000000000008ce38ee20d6b6801      -      -      0      2     65  22.6K
    nvme-eui.00000000000000008ce38ee20af7b801      -      -      0      2     65  22.6K
    nvme-eui.00000000000000008ce38ee20af7a201      -      -      0      2     65  22.4K
  raidz1-1                                      456K  3.48T      0      5    321  71.6K
    nvme-eui.00000000000000008ce38ee20af7a202      -      -      0      1     80  18.2K
    nvme-eui.00000000000000008ce38ee20d6b6802      -      -      0      1     80  18.2K
    nvme-eui.00000000000000008ce38ee20af6d202      -      -      0      1     80  17.6K
    nvme-eui.00000000000000008ce38ee20af7b802      -      -      0      1     80  17.5K
  raidz1-2                                     3.24M  3.48T      0      4    385  70.6K
    nvme-eui.00000000000000008ce38ee20af7b803      -      -      0      1     96  17.6K
    nvme-eui.00000000000000008ce38ee20af6d203      -      -      0      1     96  17.3K
    nvme-eui.00000000000000008ce38ee20d6b6803      -      -      0      1     96  17.8K
    nvme-eui.00000000000000008ce38ee20af7a203      -      -      0      1     96  17.8K
  raidz1-3                                      112K  3.48T      0      3    481  56.3K
    nvme-eui.00000000000000008ce38ee20af7a204      -      -      0      0    120  14.1K
    nvme-eui.00000000000000008ce38ee20af6d204      -      -      0      0    120  14.1K
    nvme-eui.00000000000000008ce38ee20af7b804      -      -      0      0    120  14.1K
    nvme-eui.00000000000000008ce38ee20d6b6804      -      -      0      0    120  14.1K
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# zpool iostat -v
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
qs-1612333a-0e2f-ba76-d799-5b43a58e643b         136G  13.8T  26.7K  21.0K  1.11G   840M
  raidz1-0                                     34.1G  3.45T  6.69K  5.25K   284M   210M
    nvme-eui.00000000000000008ce38ee20af6d201      -      -  1.64K  1.31K  71.9M  53.5M
    nvme-eui.00000000000000008ce38ee20d6b6801      -      -  1.65K  1.31K  68.0M  51.8M
    nvme-eui.00000000000000008ce38ee20af7b801      -      -  1.70K  1.32K  74.6M  53.5M
    nvme-eui.00000000000000008ce38ee20af7a201      -      -  1.70K  1.31K  69.9M  51.5M
  raidz1-1                                     34.1G  3.45T  6.90K  5.41K   293M   216M
    nvme-eui.00000000000000008ce38ee20af7a202      -      -  1.78K  1.36K  77.9M  55.1M
    nvme-eui.00000000000000008ce38ee20d6b6802      -      -  1.78K  1.35K  73.1M  53.2M
    nvme-eui.00000000000000008ce38ee20af6d202      -      -  1.67K  1.35K  73.1M  55.1M
    nvme-eui.00000000000000008ce38ee20af7b802      -      -  1.67K  1.35K  68.9M  53.0M
  raidz1-2                                     34.0G  3.45T  7.04K  5.53K   299M   221M
    nvme-eui.00000000000000008ce38ee20af7b803      -      -  1.78K  1.39K  77.9M  56.3M
    nvme-eui.00000000000000008ce38ee20af6d203      -      -  1.78K  1.38K  73.3M  54.3M
    nvme-eui.00000000000000008ce38ee20d6b6803      -      -  1.74K  1.38K  76.2M  56.3M
    nvme-eui.00000000000000008ce38ee20af7a203      -      -  1.74K  1.38K  71.8M  54.2M
  raidz1-3                                     34.0G  3.45T  7.21K  5.66K   306M   227M
    nvme-eui.00000000000000008ce38ee20af7a204      -      -  1.74K  1.42K  76.2M  57.8M
    nvme-eui.00000000000000008ce38ee20af6d204      -      -  1.75K  1.41K  72.3M  55.8M
    nvme-eui.00000000000000008ce38ee20af7b804      -      -  1.85K  1.42K  81.2M  57.8M
    nvme-eui.00000000000000008ce38ee20d6b6804      -      -  1.86K  1.41K  76.3M  55.5M

root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# zfs get all qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe
NAME                                          PROPERTY              VALUE                                                            SOURCE
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  type                  filesystem                                                       -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  creation              Fri Apr  5 21:55 2024                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  used                  97.4G                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  available             9.87T                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  referenced            97.4G                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  compressratio         1.02x                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  mounted               yes                                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  quota                 none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  reservation           none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  recordsize            128K                                                             local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  mountpoint            /mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  sharenfs              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  checksum              on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  compression           on                                                               inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  atime                 off                                                              local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  devices               on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  exec                  on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  setuid                on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  readonly              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  zoned                 off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapdir               hidden                                                           default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  aclmode               discard                                                          default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  aclinherit            restricted                                                       default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  createtxg             87                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  canmount              on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  xattr                 sa                                                               local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  copies                1                                                                default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  version               5                                                                -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  utf8only              off                                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  normalization         none                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  casesensitivity       sensitive                                                        -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  vscan                 off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  nbmand                off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  sharesmb              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  refquota              none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  refreservation        none                                                             local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  guid                  6553702475503833972                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  primarycache          metadata                                                         local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  secondarycache        metadata                                                         local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbysnapshots       0B                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbydataset         97.4G                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbychildren        0B                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbyrefreservation  0B                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  logbias               latency                                                          default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  objsetid              907                                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  dedup                 off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  mlslabel              none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  sync                  always                                                           local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  dnodesize             legacy                                                           default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  refcompressratio      1.02x                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  written               97.4G                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  logicalused           100G                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  logicalreferenced     100G                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  volmode               default                                                          default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  filesystem_limit      none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapshot_limit        none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  filesystem_count      none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapshot_count        none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapdev               hidden                                                           default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  acltype               posix                                                            inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  context               none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  fscontext             none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  defcontext            none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  rootcontext           none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  relatime              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  redundant_metadata    all                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  overlay               on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  encryption            off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  keylocation           none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  keyformat             none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  pbkdf2iters           0                                                                default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  special_small_blocks  0                                                                default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  quantastor:shareid    64c83783-7d77-114b-9d70-182ff010627b                             local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  quantastor:name       RZ1-NS-4                                                         inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b

root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=1279MiB/s,w=1326MiB/s][r=1279,w=1326 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3297187: Fri Apr  5 22:17:57 2024
  read: IOPS=1344, BW=1345MiB/s (1410MB/s)(78.8GiB/60004msec)
    slat (usec): min=207, max=13391, avg=2665.72, stdev=1300.29
    clat (usec): min=4, max=43312, avg=21007.47, stdev=3640.35
     lat (usec): min=3577, max=46837, avg=23674.14, stdev=3988.67
    clat percentiles (usec):
     |  1.00th=[13960],  5.00th=[15664], 10.00th=[16712], 20.00th=[17957],
     | 30.00th=[19006], 40.00th=[19792], 50.00th=[20579], 60.00th=[21627],
     | 70.00th=[22676], 80.00th=[23725], 90.00th=[25822], 95.00th=[27395],
     | 99.00th=[31327], 99.50th=[32900], 99.90th=[35914], 99.95th=[37487],
     | 99.99th=[40109]
   bw (  MiB/s): min= 1053, max= 1676, per=99.91%, avg=1343.78, stdev=14.87, samples=960
   iops        : min= 1053, max= 1676, avg=1342.95, stdev=14.87, samples=960
  write: IOPS=1345, BW=1345MiB/s (1410MB/s)(78.8GiB/60004msec); 0 zone resets
    slat (usec): min=982, max=18027, avg=3269.06, stdev=1102.48
    clat (usec): min=3, max=45710, avg=20623.81, stdev=3685.43
     lat (usec): min=2371, max=49727, avg=23893.85, stdev=3960.11
    clat percentiles (usec):
     |  1.00th=[13566],  5.00th=[15270], 10.00th=[16319], 20.00th=[17433],
     | 30.00th=[18482], 40.00th=[19268], 50.00th=[20317], 60.00th=[21103],
     | 70.00th=[22152], 80.00th=[23462], 90.00th=[25560], 95.00th=[27395],
     | 99.00th=[31065], 99.50th=[32637], 99.90th=[35914], 99.95th=[36963],
     | 99.99th=[41681]
   bw (  MiB/s): min= 1054, max= 1642, per=99.92%, avg=1344.02, stdev=13.93, samples=960
   iops        : min= 1054, max= 1642, avg=1343.23, stdev=13.92, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 4=0.01%, 10=0.01%, 20=44.65%, 50=55.33%
  cpu          : usr=1.23%, sys=14.62%, ctx=805422, majf=0, minf=83
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=80701,80713,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=1345MiB/s (1410MB/s), 1345MiB/s-1345MiB/s (1410MB/s-1410MB/s), io=78.8GiB (84.6GB), run=60004-60004msec
  WRITE: bw=1345MiB/s (1410MB/s), 1345MiB/s-1345MiB/s (1410MB/s-1410MB/s), io=78.8GiB (84.6GB), run=60004-60004msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=8972MiB/s][r=8972 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3867023: Fri Apr  5 22:19:13 2024
  read: IOPS=9789, BW=9789MiB/s (10.3GB/s)(574GiB/60001msec)
    slat (usec): min=132, max=21253, avg=812.51, stdev=199.13
    clat (usec): min=3, max=27784, avg=5722.38, stdev=1059.55
     lat (usec): min=572, max=28833, avg=6535.74, stdev=1195.14
    clat percentiles (usec):
     |  1.00th=[ 3392],  5.00th=[ 3785], 10.00th=[ 4178], 20.00th=[ 4883],
     | 30.00th=[ 5276], 40.00th=[ 5538], 50.00th=[ 5800], 60.00th=[ 5997],
     | 70.00th=[ 6259], 80.00th=[ 6587], 90.00th=[ 6980], 95.00th=[ 7308],
     | 99.00th=[ 8160], 99.50th=[ 8586], 99.90th=[10159], 99.95th=[11338],
     | 99.99th=[13829]
   bw (  MiB/s): min= 7586, max=16112, per=100.00%, avg=9791.60, stdev=192.41, samples=955
   iops        : min= 7586, max=16112, avg=9791.24, stdev=192.41, samples=955
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=7.40%, 10=92.48%, 20=0.11%, 50=0.01%
  cpu          : usr=0.72%, sys=50.26%, ctx=983625, majf=0, minf=16469
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=587371,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=9789MiB/s (10.3GB/s), 9789MiB/s-9789MiB/s (10.3GB/s-10.3GB/s), io=574GiB (616GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1472MiB/s][w=1472 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=663814: Fri Apr  5 22:20:23 2024
  write: IOPS=1512, BW=1513MiB/s (1586MB/s)(88.6GiB/60003msec); 0 zone resets
    slat (usec): min=1152, max=27062, avg=5283.42, stdev=515.27
    clat (usec): min=3, max=60080, avg=37015.66, stdev=2198.13
     lat (usec): min=5607, max=65711, avg=42299.95, stdev=2441.72
    clat percentiles (usec):
     |  1.00th=[30540],  5.00th=[33162], 10.00th=[34341], 20.00th=[35914],
     | 30.00th=[36963], 40.00th=[36963], 50.00th=[37487], 60.00th=[37487],
     | 70.00th=[38011], 80.00th=[38011], 90.00th=[38536], 95.00th=[39060],
     | 99.00th=[42206], 99.50th=[44303], 99.90th=[49021], 99.95th=[51119],
     | 99.99th=[57410]
   bw (  MiB/s): min= 1420, max= 1752, per=99.92%, avg=1511.53, stdev= 7.15, samples=960
   iops        : min= 1420, max= 1752, avg=1511.29, stdev= 7.16, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 10=0.01%, 20=0.08%, 50=99.83%, 100=0.07%
  cpu          : usr=1.34%, sys=7.75%, ctx=305244, majf=0, minf=76
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,90767,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1513MiB/s (1586MB/s), 1513MiB/s-1513MiB/s (1586MB/s-1586MB/s), io=88.6GiB (95.2GB), run=60003-60003msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=838MiB/s,w=859MiB/s][r=6702,w=6873 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1039968: Fri Apr  5 22:21:38 2024
  read: IOPS=6788, BW=849MiB/s (890MB/s)(49.7GiB/60001msec)
    slat (usec): min=23, max=8405, avg=625.62, stdev=221.72
    clat (usec): min=2, max=16793, avg=4128.57, stdev=738.29
     lat (usec): min=465, max=17272, avg=4754.74, stdev=826.33
    clat percentiles (usec):
     |  1.00th=[ 3195],  5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3654],
     | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4080],
     | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5342],
     | 99.00th=[ 7242], 99.50th=[ 8029], 99.90th=[10028], 99.95th=[11207],
     | 99.99th=[12649]
   bw (  KiB/s): min=766720, max=983816, per=99.99%, avg=868755.42, stdev=5514.89, samples=952
   iops        : min= 5990, max= 7686, avg=6786.97, stdev=43.09, samples=952
  write: IOPS=6783, BW=848MiB/s (889MB/s)(49.7GiB/60001msec); 0 zone resets
    slat (usec): min=296, max=7972, avg=545.13, stdev=166.22
    clat (usec): min=2, max=16495, avg=4127.09, stdev=729.66
     lat (usec): min=431, max=17508, avg=4672.73, stdev=796.76
    clat percentiles (usec):
     |  1.00th=[ 3195],  5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 3654],
     | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4080],
     | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5342],
     | 99.00th=[ 7177], 99.50th=[ 7963], 99.90th=[10028], 99.95th=[11207],
     | 99.99th=[12649]
   bw (  KiB/s): min=754688, max=978449, per=99.98%, avg=868127.03, stdev=5654.23, samples=952
   iops        : min= 5896, max= 7644, avg=6782.07, stdev=44.18, samples=952
  lat (usec)   : 4=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=52.19%, 10=47.70%, 20=0.10%
  cpu          : usr=1.19%, sys=11.52%, ctx=1926005, majf=0, minf=89
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=407287,407021,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=849MiB/s (890MB/s), 849MiB/s-849MiB/s (890MB/s-890MB/s), io=49.7GiB (53.4GB), run=60001-60001msec
  WRITE: bw=848MiB/s (889MB/s), 848MiB/s-848MiB/s (889MB/s-889MB/s), io=49.7GiB (53.3GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4728MiB/s][r=37.8k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1595840: Fri Apr  5 22:22:54 2024
  read: IOPS=37.0k, BW=4746MiB/s (4976MB/s)(278GiB/60002msec)
    slat (usec): min=12, max=7071, avg=208.96, stdev=24.71
    clat (usec): min=2, max=10337, avg=1476.01, stdev=69.15
     lat (usec): min=172, max=10531, avg=1685.19, stdev=74.46
    clat percentiles (usec):
     |  1.00th=[ 1369],  5.00th=[ 1401], 10.00th=[ 1418], 20.00th=[ 1434],
     | 30.00th=[ 1450], 40.00th=[ 1467], 50.00th=[ 1467], 60.00th=[ 1483],
     | 70.00th=[ 1500], 80.00th=[ 1500], 90.00th=[ 1532], 95.00th=[ 1549],
     | 99.00th=[ 1631], 99.50th=[ 1680], 99.90th=[ 1991], 99.95th=[ 2376],
     | 99.99th=[ 3195]
   bw (  MiB/s): min= 4608, max= 4874, per=99.99%, avg=4745.11, stdev= 4.74, samples=953
   iops        : min=36864, max=38992, avg=37960.73, stdev=37.90, samples=953
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.89%, 4=0.10%, 10=0.01%, 20=0.01%
  cpu          : usr=1.05%, sys=18.10%, ctx=2284864, majf=0, minf=2140
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2277920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4746MiB/s (4976MB/s), 4746MiB/s-4746MiB/s (4976MB/s-4976MB/s), io=278GiB (299GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1397MiB/s][w=11.2k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1617972: Fri Apr  5 22:24:08 2024
  write: IOPS=10.8k, BW=1351MiB/s (1417MB/s)(79.2GiB/60002msec); 0 zone resets
    slat (usec): min=343, max=13457, avg=737.05, stdev=278.91
    clat (usec): min=2, max=27143, avg=5180.53, stdev=1015.34
     lat (usec): min=646, max=28003, avg=5918.00, stdev=1124.87
    clat percentiles (usec):
     |  1.00th=[ 4113],  5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4424],
     | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 5080],
     | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6128], 95.00th=[ 6194],
     | 99.00th=[ 7308], 99.50th=[10421], 99.90th=[16188], 99.95th=[16909],
     | 99.99th=[18482]
   bw (  MiB/s): min= 1134, max= 1631, per=99.95%, avg=1350.81, stdev=14.67, samples=954
   iops        : min= 9072, max=13048, avg=10806.11, stdev=117.36, samples=954
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%
  lat (msec)   : 2=0.01%, 4=0.40%, 10=99.05%, 20=0.54%, 50=0.01%
  cpu          : usr=0.87%, sys=7.69%, ctx=1466089, majf=0, minf=94
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,648735,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1351MiB/s (1417MB/s), 1351MiB/s-1351MiB/s (1417MB/s-1417MB/s), io=79.2GiB (85.0GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=31.7MiB/s,w=32.2MiB/s][r=8106,w=8232 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2030077: Fri Apr  5 22:25:21 2024
  read: IOPS=7765, BW=30.3MiB/s (31.8MB/s)(1820MiB/60001msec)
    slat (usec): min=5, max=195843, avg=344.05, stdev=1134.21
    clat (usec): min=2, max=210496, avg=3600.32, stdev=6340.14
     lat (usec): min=191, max=210785, avg=3944.85, stdev=6698.06
    clat percentiles (usec):
     |  1.00th=[  1909],  5.00th=[  2180], 10.00th=[  2311], 20.00th=[  2507],
     | 30.00th=[  2638], 40.00th=[  2802], 50.00th=[  2933], 60.00th=[  3064],
     | 70.00th=[  3261], 80.00th=[  3523], 90.00th=[  4424], 95.00th=[  6128],
     | 99.00th=[ 11338], 99.50th=[ 19530], 99.90th=[122160], 99.95th=[168821],
     | 99.99th=[198181]
   bw (  KiB/s): min=21655, max=42264, per=99.78%, avg=30991.86, stdev=606.38, samples=952
   iops        : min= 5413, max=10566, avg=7747.45, stdev=151.59, samples=952
  write: IOPS=7764, BW=30.3MiB/s (31.8MB/s)(1820MiB/60001msec); 0 zone resets
    slat (usec): min=76, max=197019, avg=677.26, stdev=2077.99
    clat (usec): min=2, max=210546, avg=3615.33, stdev=6378.90
     lat (usec): min=485, max=211269, avg=4293.26, stdev=7126.43
    clat percentiles (usec):
     |  1.00th=[  1926],  5.00th=[  2180], 10.00th=[  2343], 20.00th=[  2507],
     | 30.00th=[  2671], 40.00th=[  2802], 50.00th=[  2933], 60.00th=[  3097],
     | 70.00th=[  3261], 80.00th=[  3556], 90.00th=[  4424], 95.00th=[  6194],
     | 99.00th=[ 11600], 99.50th=[ 19792], 99.90th=[123208], 99.95th=[170918],
     | 99.99th=[198181]
   bw (  KiB/s): min=21880, max=40840, per=99.75%, avg=30978.87, stdev=580.78, samples=952
   iops        : min= 5470, max=10210, avg=7744.22, stdev=145.18, samples=952
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=1.77%, 4=85.61%, 10=11.34%, 20=0.79%, 50=0.25%
  lat (msec)   : 100=0.10%, 250=0.14%
  cpu          : usr=0.89%, sys=14.95%, ctx=2984564, majf=0, minf=111
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=465924,465864,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=1820MiB (1908MB), run=60001-60001msec
  WRITE: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=1820MiB (1908MB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=182MiB/s][r=46.6k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2894507: Fri Apr  5 22:26:33 2024
  read: IOPS=46.7k, BW=182MiB/s (191MB/s)(10.7GiB/60001msec)
    slat (usec): min=2, max=6121, avg=169.31, stdev=44.50
    clat (usec): min=2, max=7143, avg=1201.61, stdev=216.29
     lat (usec): min=30, max=7305, avg=1371.18, stdev=242.39
    clat percentiles (usec):
     |  1.00th=[  783],  5.00th=[  873], 10.00th=[  906], 20.00th=[  979],
     | 30.00th=[ 1074], 40.00th=[ 1188], 50.00th=[ 1254], 60.00th=[ 1287],
     | 70.00th=[ 1319], 80.00th=[ 1369], 90.00th=[ 1418], 95.00th=[ 1483],
     | 99.00th=[ 1680], 99.50th=[ 1778], 99.90th=[ 2376], 99.95th=[ 2704],
     | 99.99th=[ 3490]
   bw (  KiB/s): min=155384, max=212707, per=99.98%, avg=186560.88, stdev=1227.66, samples=952
   iops        : min=38846, max=53176, avg=46639.98, stdev=306.92, samples=952
  lat (usec)   : 4=0.01%, 50=0.09%, 100=0.01%, 250=0.04%, 500=0.13%
  lat (usec)   : 750=0.56%, 1000=22.02%
  lat (msec)   : 2=76.95%, 4=0.19%, 10=0.01%
  cpu          : usr=1.57%, sys=20.24%, ctx=2937303, majf=0, minf=148
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2799054,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=182MiB/s (191MB/s), 182MiB/s-182MiB/s (191MB/s-191MB/s), io=10.7GiB (11.5GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=201MiB/s][w=51.4k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=4184460: Fri Apr  5 22:27:45 2024
  write: IOPS=51.7k, BW=202MiB/s (212MB/s)(11.8GiB/60001msec); 0 zone resets
    slat (usec): min=70, max=53405, avg=152.84, stdev=309.04
    clat (usec): min=2, max=59536, avg=1084.66, stdev=859.80
     lat (usec): min=115, max=59788, avg=1237.75, stdev=921.85
    clat percentiles (usec):
     |  1.00th=[  775],  5.00th=[  824], 10.00th=[  857], 20.00th=[  898],
     | 30.00th=[  938], 40.00th=[  979], 50.00th=[ 1020], 60.00th=[ 1057],
     | 70.00th=[ 1106], 80.00th=[ 1172], 90.00th=[ 1270], 95.00th=[ 1385],
     | 99.00th=[ 1729], 99.50th=[ 2474], 99.90th=[10552], 99.95th=[12125],
     | 99.99th=[49021]
   bw (  KiB/s): min=177616, max=234181, per=99.99%, avg=206707.03, stdev=1562.20, samples=952
   iops        : min=44404, max=58545, avg=51676.51, stdev=390.55, samples=952
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.20%, 1000=45.47%
  lat (msec)   : 2=53.72%, 4=0.16%, 10=0.33%, 20=0.10%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=1.64%, sys=23.89%, ctx=8685675, majf=0, minf=88
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3100824,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=11.8GiB (12.7GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe#
# Kioxia CM6, RaidZ1, sync=always (single namespace per drive)

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         844.35  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         836.78  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1         851.14  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         842.81  GB /   3.84  TB    512   B +  0 B   2.2.0

  pool: qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5        ONLINE       0     0     0
          raidz1-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0

errors: No known data errors

root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# zfs get all qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe
NAME                                          PROPERTY              VALUE                                                            SOURCE
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  type                  filesystem                                                       -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  creation              Fri Apr  5 17:19 2024                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  used                  97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  available             9.86T                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  referenced            97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  compressratio         1.02x                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  mounted               yes                                                              -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  quota                 none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  reservation           none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  recordsize            128K                                                             local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  mountpoint            /mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  sharenfs              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  checksum              on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  compression           on                                                               inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  atime                 off                                                              local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  devices               on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  exec                  on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  setuid                on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  readonly              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  zoned                 off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapdir               hidden                                                           default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  aclmode               discard                                                          default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  aclinherit            restricted                                                       default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  createtxg             35                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  canmount              on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  xattr                 sa                                                               local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  copies                1                                                                default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  version               5                                                                -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  utf8only              off                                                              -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  normalization         none                                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  casesensitivity       sensitive                                                        -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  vscan                 off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  nbmand                off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  sharesmb              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  refquota              none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  refreservation        none                                                             local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  guid                  15109964299467543167                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  primarycache          metadata                                                         local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  secondarycache        metadata                                                         local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbysnapshots       0B                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbydataset         97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbychildren        0B                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  usedbyrefreservation  0B                                                               -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  logbias               latency                                                          default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  objsetid              164                                                              -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  dedup                 off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  mlslabel              none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  sync                  always                                                           local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  dnodesize             legacy                                                           default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  refcompressratio      1.02x                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  written               97.4G                                                            -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  logicalused           100G                                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  logicalreferenced     100G                                                             -
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  volmode               default                                                          default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  filesystem_limit      none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapshot_limit        none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  filesystem_count      none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapshot_count        none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  snapdev               hidden                                                           default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  acltype               posix                                                            inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  context               none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  fscontext             none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  defcontext            none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  rootcontext           none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  relatime              off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  redundant_metadata    all                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  overlay               on                                                               default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  encryption            off                                                              default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  keylocation           none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  keyformat             none                                                             default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  pbkdf2iters           0                                                                default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  special_small_blocks  0                                                                default
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  quantastor:shareid    515c12c1-bca4-cd8c-2e3c-89dcd5b6efea                             local
qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe  quantastor:name       NVMe-Test-No-NameSpaces                                          inherited from qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5

root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=1331MiB/s,w=1363MiB/s][r=1331,w=1363 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=46975: Fri Apr  5 19:32:57 2024
  read: IOPS=1351, BW=1351MiB/s (1417MB/s)(79.2GiB/60003msec)
    slat (usec): min=213, max=13508, avg=2660.64, stdev=1289.53
    clat (usec): min=4, max=49112, avg=20919.09, stdev=3639.66
     lat (usec): min=4286, max=51340, avg=23580.66, stdev=3982.95
    clat percentiles (usec):
     |  1.00th=[13829],  5.00th=[15664], 10.00th=[16581], 20.00th=[17957],
     | 30.00th=[18744], 40.00th=[19792], 50.00th=[20579], 60.00th=[21365],
     | 70.00th=[22414], 80.00th=[23725], 90.00th=[25560], 95.00th=[27395],
     | 99.00th=[31065], 99.50th=[32637], 99.90th=[36439], 99.95th=[38011],
     | 99.99th=[43254]
   bw (  MiB/s): min= 1043, max= 1636, per=99.95%, avg=1350.70, stdev=14.60, samples=960
   iops        : min= 1043, max= 1636, avg=1350.43, stdev=14.60, samples=960
  write: IOPS=1351, BW=1351MiB/s (1417MB/s)(79.2GiB/60003msec); 0 zone resets
    slat (usec): min=997, max=16840, avg=3246.18, stdev=1090.28
    clat (usec): min=3, max=43767, avg=20517.13, stdev=3646.64
     lat (usec): min=3853, max=51728, avg=23764.28, stdev=3922.70
    clat percentiles (usec):
     |  1.00th=[13435],  5.00th=[15139], 10.00th=[16188], 20.00th=[17433],
     | 30.00th=[18482], 40.00th=[19268], 50.00th=[20317], 60.00th=[21103],
     | 70.00th=[22152], 80.00th=[23462], 90.00th=[25297], 95.00th=[26870],
     | 99.00th=[30802], 99.50th=[32375], 99.90th=[35914], 99.95th=[37487],
     | 99.99th=[41157]
   bw (  MiB/s): min= 1072, max= 1646, per=99.95%, avg=1350.65, stdev=13.71, samples=960
   iops        : min= 1072, max= 1646, avg=1350.38, stdev=13.71, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%, 50=0.01%
  lat (msec)   : 4=0.01%, 10=0.02%, 20=45.55%, 50=54.43%
  cpu          : usr=1.24%, sys=14.45%, ctx=781428, majf=0, minf=101
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=81086,81087,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=1351MiB/s (1417MB/s), 1351MiB/s-1351MiB/s (1417MB/s-1417MB/s), io=79.2GiB (85.0GB), run=60003-60003msec
  WRITE: bw=1351MiB/s (1417MB/s), 1351MiB/s-1351MiB/s (1417MB/s-1417MB/s), io=79.2GiB (85.0GB), run=60003-60003msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=9045MiB/s][r=9045 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=609969: Fri Apr  5 19:34:12 2024
  read: IOPS=9961, BW=9961MiB/s (10.4GB/s)(584GiB/60001msec)
    slat (usec): min=139, max=9784, avg=798.54, stdev=184.21
    clat (usec): min=3, max=18782, avg=5623.85, stdev=950.54
     lat (usec): min=818, max=19885, avg=6423.17, stdev=1070.23
    clat percentiles (usec):
     |  1.00th=[ 3556],  5.00th=[ 4047], 10.00th=[ 4359], 20.00th=[ 4817],
     | 30.00th=[ 5145], 40.00th=[ 5407], 50.00th=[ 5669], 60.00th=[ 5866],
     | 70.00th=[ 6128], 80.00th=[ 6390], 90.00th=[ 6783], 95.00th=[ 7111],
     | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[ 9634], 99.95th=[10290],
     | 99.99th=[13042]
   bw (  MiB/s): min= 7888, max=14984, per=100.00%, avg=9960.73, stdev=159.31, samples=956
   iops        : min= 7888, max=14984, avg=9960.08, stdev=159.34, samples=956
  lat (usec)   : 4=0.01%, 10=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=4.35%, 10=95.58%, 20=0.07%
  cpu          : usr=0.72%, sys=49.20%, ctx=996187, majf=0, minf=16482
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=597671,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=9961MiB/s (10.4GB/s), 9961MiB/s-9961MiB/s (10.4GB/s-10.4GB/s), io=584GiB (627GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1478MiB/s][w=1478 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1593338: Fri Apr  5 19:35:24 2024
  write: IOPS=1531, BW=1532MiB/s (1606MB/s)(89.8GiB/60006msec); 0 zone resets
    slat (usec): min=1499, max=23150, avg=5216.90, stdev=566.36
    clat (usec): min=3, max=58504, avg=36549.16, stdev=2871.83
     lat (usec): min=5234, max=64020, avg=41766.94, stdev=3230.65
    clat percentiles (usec):
     |  1.00th=[26084],  5.00th=[32375], 10.00th=[34341], 20.00th=[36439],
     | 30.00th=[36963], 40.00th=[36963], 50.00th=[36963], 60.00th=[37487],
     | 70.00th=[37487], 80.00th=[37487], 90.00th=[38011], 95.00th=[38536],
     | 99.00th=[42206], 99.50th=[44303], 99.90th=[49546], 99.95th=[50594],
     | 99.99th=[54789]
   bw (  MiB/s): min= 1446, max= 2440, per=99.90%, avg=1530.39, stdev=13.29, samples=960
   iops        : min= 1445, max= 2440, avg=1529.96, stdev=13.30, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 10=0.01%, 20=0.75%, 50=99.16%, 100=0.07%
  cpu          : usr=1.46%, sys=7.63%, ctx=321009, majf=0, minf=92
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,91929,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1532MiB/s (1606MB/s), 1532MiB/s-1532MiB/s (1606MB/s-1606MB/s), io=89.8GiB (96.4GB), run=60006-60006msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=868MiB/s,w=858MiB/s][r=6943,w=6864 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1956442: Fri Apr  5 19:36:39 2024
  read: IOPS=6884, BW=861MiB/s (902MB/s)(50.4GiB/60001msec)
    slat (usec): min=24, max=11658, avg=618.07, stdev=218.00
    clat (usec): min=3, max=19111, avg=4070.28, stdev=715.47
     lat (usec): min=561, max=19808, avg=4688.87, stdev=800.26
    clat percentiles (usec):
     |  1.00th=[ 3163],  5.00th=[ 3359], 10.00th=[ 3458], 20.00th=[ 3621],
     | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4047],
     | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5211],
     | 99.00th=[ 7046], 99.50th=[ 7767], 99.90th=[ 9634], 99.95th=[10683],
     | 99.99th=[14877]
   bw (  KiB/s): min=776066, max=991744, per=99.98%, avg=880965.94, stdev=5149.14, samples=957
   iops        : min= 6062, max= 7748, avg=6882.26, stdev=40.25, samples=957
  write: IOPS=6879, BW=860MiB/s (902MB/s)(50.4GiB/60001msec); 0 zone resets
    slat (usec): min=293, max=15351, avg=536.19, stdev=162.81
    clat (usec): min=2, max=19109, avg=4070.32, stdev=708.34
     lat (usec): min=438, max=20570, avg=4607.02, stdev=772.39
    clat percentiles (usec):
     |  1.00th=[ 3163],  5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3621],
     | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3916], 60.00th=[ 4047],
     | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5211],
     | 99.00th=[ 6980], 99.50th=[ 7767], 99.90th=[ 9765], 99.95th=[10683],
     | 99.99th=[14484]
   bw (  KiB/s): min=771651, max=988416, per=99.97%, avg=880260.67, stdev=5598.60, samples=957
   iops        : min= 6028, max= 7722, avg=6876.76, stdev=43.76, samples=957
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=56.68%, 10=43.24%, 20=0.08%
  cpu          : usr=1.20%, sys=11.31%, ctx=1936932, majf=0, minf=90
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=413063,412764,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=861MiB/s (902MB/s), 861MiB/s-861MiB/s (902MB/s-902MB/s), io=50.4GiB (54.1GB), run=60001-60001msec
  WRITE: bw=860MiB/s (902MB/s), 860MiB/s-860MiB/s (902MB/s-902MB/s), io=50.4GiB (54.1GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4844MiB/s][r=38.8k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2509271: Fri Apr  5 19:37:52 2024
  read: IOPS=38.4k, BW=4802MiB/s (5036MB/s)(281GiB/60002msec)
    slat (usec): min=13, max=9237, avg=206.51, stdev=23.35
    clat (usec): min=2, max=10583, avg=1458.53, stdev=66.14
     lat (usec): min=211, max=10787, avg=1665.24, stdev=71.46
    clat percentiles (usec):
     |  1.00th=[ 1336],  5.00th=[ 1385], 10.00th=[ 1401], 20.00th=[ 1418],
     | 30.00th=[ 1434], 40.00th=[ 1450], 50.00th=[ 1450], 60.00th=[ 1467],
     | 70.00th=[ 1467], 80.00th=[ 1483], 90.00th=[ 1516], 95.00th=[ 1532],
     | 99.00th=[ 1614], 99.50th=[ 1663], 99.90th=[ 2024], 99.95th=[ 2573],
     | 99.99th=[ 2966]
   bw (  MiB/s): min= 4728, max= 5071, per=99.99%, avg=4801.77, stdev= 5.23, samples=955
   iops        : min=37824, max=40572, avg=38414.00, stdev=41.82, samples=955
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.89%, 4=0.10%, 10=0.01%, 20=0.01%
  cpu          : usr=1.13%, sys=18.04%, ctx=2313037, majf=0, minf=2149
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2305216,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4802MiB/s (5036MB/s), 4802MiB/s-4802MiB/s (5036MB/s-5036MB/s), io=281GiB (302GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1271MiB/s][w=10.2k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2532361: Fri Apr  5 19:39:12 2024
  write: IOPS=11.0k, BW=1375MiB/s (1442MB/s)(80.6GiB/60001msec); 0 zone resets
    slat (usec): min=385, max=12861, avg=724.30, stdev=277.13
    clat (usec): min=2, max=19016, avg=5091.36, stdev=1008.23
     lat (usec): min=456, max=19814, avg=5816.05, stdev=1117.13
    clat percentiles (usec):
     |  1.00th=[ 4080],  5.00th=[ 4228], 10.00th=[ 4293], 20.00th=[ 4359],
     | 30.00th=[ 4424], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4948],
     | 70.00th=[ 5866], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6128],
     | 99.00th=[ 7308], 99.50th=[10421], 99.90th=[15795], 99.95th=[16450],
     | 99.99th=[17433]
   bw (  MiB/s): min= 1160, max= 1611, per=99.98%, avg=1374.95, stdev=15.71, samples=958
   iops        : min= 9280, max=12888, avg=10999.27, stdev=125.63, samples=958
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%
  lat (msec)   : 2=0.01%, 4=0.57%, 10=98.89%, 20=0.54%
  cpu          : usr=0.99%, sys=7.54%, ctx=1476008, majf=0, minf=85
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,660107,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1375MiB/s (1442MB/s), 1375MiB/s-1375MiB/s (1442MB/s-1442MB/s), io=80.6GiB (86.5GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=28.8MiB/s,w=29.7MiB/s][r=7368,w=7610 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2941030: Fri Apr  5 19:40:29 2024
  read: IOPS=7843, BW=30.6MiB/s (32.1MB/s)(1838MiB/60001msec)
    slat (usec): min=5, max=192100, avg=339.72, stdev=1127.39
    clat (usec): min=2, max=204985, avg=3565.12, stdev=6010.09
     lat (usec): min=335, max=205433, avg=3905.28, stdev=6351.35
    clat percentiles (usec):
     |  1.00th=[  1893],  5.00th=[  2147], 10.00th=[  2278], 20.00th=[  2474],
     | 30.00th=[  2606], 40.00th=[  2737], 50.00th=[  2868], 60.00th=[  3032],
     | 70.00th=[  3195], 80.00th=[  3458], 90.00th=[  4293], 95.00th=[  6259],
     | 99.00th=[ 13173], 99.50th=[ 23200], 99.90th=[109577], 99.95th=[158335],
     | 99.99th=[193987]
   bw (  KiB/s): min=21976, max=42536, per=99.93%, avg=31351.26, stdev=630.86, samples=952
   iops        : min= 5494, max=10634, avg=7837.49, stdev=157.70, samples=952
  write: IOPS=7843, BW=30.6MiB/s (32.1MB/s)(1838MiB/60001msec); 0 zone resets
    slat (usec): min=76, max=193388, avg=671.34, stdev=1892.67
    clat (usec): min=2, max=207467, avg=3578.09, stdev=6118.03
     lat (usec): min=419, max=209445, avg=4250.04, stdev=6851.63
    clat percentiles (usec):
     |  1.00th=[  1893],  5.00th=[  2147], 10.00th=[  2278], 20.00th=[  2474],
     | 30.00th=[  2606], 40.00th=[  2769], 50.00th=[  2900], 60.00th=[  3032],
     | 70.00th=[  3195], 80.00th=[  3490], 90.00th=[  4293], 95.00th=[  6259],
     | 99.00th=[ 12911], 99.50th=[ 22414], 99.90th=[112722], 99.95th=[156238],
     | 99.99th=[193987]
   bw (  KiB/s): min=22666, max=41710, per=99.92%, avg=31349.20, stdev=612.22, samples=952
   iops        : min= 5666, max=10427, avg=7836.95, stdev=153.05, samples=952
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=2.22%, 4=86.01%, 10=10.38%, 20=0.83%, 50=0.33%
  lat (msec)   : 100=0.13%, 250=0.11%
  cpu          : usr=0.83%, sys=14.68%, ctx=2940807, majf=0, minf=100
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=470617,470637,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=1838MiB (1928MB), run=60001-60001msec
  WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=1838MiB (1928MB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=180MiB/s][r=46.1k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3816456: Fri Apr  5 19:41:41 2024
  read: IOPS=46.3k, BW=181MiB/s (190MB/s)(10.6GiB/60002msec)
    slat (usec): min=2, max=12222, avg=170.55, stdev=46.70
    clat (usec): min=2, max=13513, avg=1209.89, stdev=216.65
     lat (usec): min=30, max=13698, avg=1380.71, stdev=242.36
    clat percentiles (usec):
     |  1.00th=[  824],  5.00th=[  881], 10.00th=[  914], 20.00th=[  988],
     | 30.00th=[ 1106], 40.00th=[ 1205], 50.00th=[ 1254], 60.00th=[ 1287],
     | 70.00th=[ 1336], 80.00th=[ 1369], 90.00th=[ 1418], 95.00th=[ 1483],
     | 99.00th=[ 1696], 99.50th=[ 1795], 99.90th=[ 2442], 99.95th=[ 2704],
     | 99.99th=[ 3752]
   bw (  KiB/s): min=160120, max=212219, per=99.96%, avg=185237.49, stdev=1049.38, samples=953
   iops        : min=40030, max=53054, avg=46309.18, stdev=262.34, samples=953
  lat (usec)   : 4=0.01%, 50=0.11%, 100=0.01%, 250=0.03%, 500=0.09%
  lat (usec)   : 750=0.44%, 1000=20.73%
  lat (msec)   : 2=78.36%, 4=0.22%, 10=0.01%, 20=0.01%
  cpu          : usr=1.51%, sys=19.73%, ctx=2941573, majf=0, minf=143
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2779838,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=10.6GiB (11.4GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-fca06e45-c340-f4e5-ddfd-210b16dd85f5/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=196MiB/s][w=50.2k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=894486: Fri Apr  5 19:42:52 2024
  write: IOPS=51.6k, BW=201MiB/s (211MB/s)(11.8GiB/60001msec); 0 zone resets
    slat (usec): min=73, max=47096, avg=153.22, stdev=298.78
    clat (usec): min=2, max=50789, avg=1087.25, stdev=825.36
     lat (usec): min=112, max=51076, avg=1240.70, stdev=884.55
    clat percentiles (usec):
     |  1.00th=[  783],  5.00th=[  832], 10.00th=[  865], 20.00th=[  906],
     | 30.00th=[  947], 40.00th=[  979], 50.00th=[ 1020], 60.00th=[ 1057],
     | 70.00th=[ 1106], 80.00th=[ 1172], 90.00th=[ 1270], 95.00th=[ 1385],
     | 99.00th=[ 1729], 99.50th=[ 2147], 99.90th=[10552], 99.95th=[12125],
     | 99.99th=[44827]
   bw (  KiB/s): min=175864, max=237376, per=100.00%, avg=206243.45, stdev=1392.83, samples=952
   iops        : min=43966, max=59344, avg=51560.76, stdev=348.20, samples=952
  lat (usec)   : 4=0.01%, 20=0.01%, 250=0.01%, 500=0.01%, 750=0.14%
  lat (usec)   : 1000=44.15%
  lat (msec)   : 2=55.14%, 4=0.16%, 10=0.29%, 20=0.09%, 50=0.02%
  lat (msec)   : 100=0.01%
  cpu          : usr=1.63%, sys=24.06%, ctx=8870010, majf=0, minf=92
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3093394,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=201MiB/s (211MB/s), 201MiB/s-201MiB/s (211MB/s-211MB/s), io=11.8GiB (12.7GB), run=60001-60001msec
# Kioxia CM6, raidz1, 4 namespaces per drive, sync=standard

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n2     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n3     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme0n4     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n2     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n3     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme1n4     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n2     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n3     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme2n4     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n2     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              2           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n3     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              3           0.00   B / 959.49  GB    512   B +  0 B   2.2.0
/dev/nvme3n4     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              4           0.00   B / 959.49  GB    512   B +  0 B   2.2.0

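The listing above shows each 3.84 TB CM6 carved into four equal ~959 GB namespaces. For anyone reproducing this, `nvme create-ns` takes the namespace size as a block count (`--nsze`/`--ncap`), not bytes, so you have to do the division yourself. A minimal sketch of that arithmetic, assuming the nominal 3.84 TB (decimal) capacity and 512 B logical blocks (the small gap vs. the 959.49 GB shown is formatting/metadata overhead):

```python
# Sketch: blocks per namespace when splitting a drive into N equal
# namespaces, for use with `nvme create-ns --nsze/--ncap`.
def ns_blocks(total_bytes: int, n_namespaces: int, block_size: int = 512) -> int:
    """Whole blocks per namespace, rounding down."""
    return total_bytes // n_namespaces // block_size

# Dell/Kioxia CM6 3.84 TB nominal, 4 namespaces:
blocks = ns_blocks(3_840_000_000_000, 4)
print(blocks)                      # value to pass as --nsze and --ncap
print(blocks * 512 / 1e9, "GB")    # ~960 GB per namespace before overhead
```

After `create-ns`, each namespace still has to be attached to the controller (`nvme attach-ns`) before it shows up as `/dev/nvme0nX`.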
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# zpool status -L
  pool: qs-1612333a-0e2f-ba76-d799-5b43a58e643b
 state: ONLINE
config:

        NAME                                     STATE     READ WRITE CKSUM
        qs-1612333a-0e2f-ba76-d799-5b43a58e643b  ONLINE       0     0     0
          raidz1-0                               ONLINE       0     0     0
            nvme0n1                              ONLINE       0     0     0
            nvme1n1                              ONLINE       0     0     0
            nvme2n1                              ONLINE       0     0     0
            nvme3n1                              ONLINE       0     0     0
          raidz1-1                               ONLINE       0     0     0
            nvme3n2                              ONLINE       0     0     0
            nvme1n2                              ONLINE       0     0     0
            nvme0n2                              ONLINE       0     0     0
            nvme2n2                              ONLINE       0     0     0
          raidz1-2                               ONLINE       0     0     0
            nvme2n3                              ONLINE       0     0     0
            nvme0n3                              ONLINE       0     0     0
            nvme1n3                              ONLINE       0     0     0
            nvme3n3                              ONLINE       0     0     0
          raidz1-3                               ONLINE       0     0     0
            nvme3n4                              ONLINE       0     0     0
            nvme0n4                              ONLINE       0     0     0
            nvme2n4                              ONLINE       0     0     0
            nvme1n4                              ONLINE       0     0     0

errors: No known data errors

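The pool layout above follows a simple rule: raidz1 vdev *i* gets namespace *i* from every physical drive, so no vdev ever holds two namespaces from the same controller and a single drive failure costs each vdev exactly one member. A hedged sketch of generating that `zpool create` invocation (the pool name `tank` is a placeholder; QuantaStor builds its own GUID-named pools):

```python
# Sketch: build the vdev argument list for `zpool create`, striping
# namespaces across drives as in the zpool status output above.
drives, namespaces = 4, 4
args = []
for ns in range(1, namespaces + 1):
    members = [f"nvme{d}n{ns}" for d in range(drives)]
    args += ["raidz1"] + members

cmd = "zpool create tank " + " ".join(args)
print(cmd)
```

Each `raidz1` keyword starts a new vdev, so the one command creates all four raidz1-0..3 vdevs in a single stripe.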
root@quantastor:/# zpool status -v
  pool: qs-1612333a-0e2f-ba76-d799-5b43a58e643b
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-1612333a-0e2f-ba76-d799-5b43a58e643b        ONLINE       0     0     0
          raidz1-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0
          raidz1-1                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a202  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6802  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d202  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b802  ONLINE       0     0     0
          raidz1-2                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d203  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6803  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a203  ONLINE       0     0     0
          raidz1-3                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a204  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d204  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b804  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6804  ONLINE       0     0     0

errors: No known data errors

root@quantastor:/# zpool iostat -v
                                                 capacity     operations     bandwidth
pool                                           alloc   free   read  write   read  write
---------------------------------------------  -----  -----  -----  -----  -----  -----
qs-1612333a-0e2f-ba76-d799-5b43a58e643b        5.32M  13.9T      0     21  1.03K   228K
  raidz1-0                                     1.52M  3.48T      0     11    262  90.4K
    nvme-eui.00000000000000008ce38ee20af6d201      -      -      0      2     65  22.8K
    nvme-eui.00000000000000008ce38ee20d6b6801      -      -      0      2     65  22.6K
    nvme-eui.00000000000000008ce38ee20af7b801      -      -      0      2     65  22.6K
    nvme-eui.00000000000000008ce38ee20af7a201      -      -      0      2     65  22.4K
  raidz1-1                                      456K  3.48T      0      5    321  71.6K
    nvme-eui.00000000000000008ce38ee20af7a202      -      -      0      1     80  18.2K
    nvme-eui.00000000000000008ce38ee20d6b6802      -      -      0      1     80  18.2K
    nvme-eui.00000000000000008ce38ee20af6d202      -      -      0      1     80  17.6K
    nvme-eui.00000000000000008ce38ee20af7b802      -      -      0      1     80  17.5K
  raidz1-2                                     3.24M  3.48T      0      4    385  70.6K
    nvme-eui.00000000000000008ce38ee20af7b803      -      -      0      1     96  17.6K
    nvme-eui.00000000000000008ce38ee20af6d203      -      -      0      1     96  17.3K
    nvme-eui.00000000000000008ce38ee20d6b6803      -      -      0      1     96  17.8K
    nvme-eui.00000000000000008ce38ee20af7a203      -      -      0      1     96  17.8K
  raidz1-3                                      112K  3.48T      0      3    481  56.3K
    nvme-eui.00000000000000008ce38ee20af7a204      -      -      0      0    120  14.1K
    nvme-eui.00000000000000008ce38ee20af6d204      -      -      0      0    120  14.1K
    nvme-eui.00000000000000008ce38ee20af7b804      -      -      0      0    120  14.1K
    nvme-eui.00000000000000008ce38ee20d6b6804      -      -      0      0    120  14.1K
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# zfs get all qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe
NAME                                          PROPERTY              VALUE                                                            SOURCE
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  type                  filesystem                                                       -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  creation              Fri Apr  5 21:55 2024                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  used                  151K                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  available             9.97T                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  referenced            151K                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  compressratio         1.00x                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  mounted               yes                                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  quota                 none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  reservation           none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  recordsize            128K                                                             local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  mountpoint            /mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  sharenfs              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  checksum              on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  compression           on                                                               inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  atime                 off                                                              local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  devices               on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  exec                  on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  setuid                on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  readonly              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  zoned                 off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapdir               hidden                                                           default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  aclmode               discard                                                          default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  aclinherit            restricted                                                       default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  createtxg             87                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  canmount              on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  xattr                 sa                                                               local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  copies                1                                                                default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  version               5                                                                -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  utf8only              off                                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  normalization         none                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  casesensitivity       sensitive                                                        -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  vscan                 off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  nbmand                off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  sharesmb              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  refquota              none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  refreservation        none                                                             local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  guid                  6553702475503833972                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  primarycache          metadata                                                         local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  secondarycache        metadata                                                         local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbysnapshots       0B                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbydataset         151K                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbychildren        0B                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  usedbyrefreservation  0B                                                               -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  logbias               latency                                                          default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  objsetid              907                                                              -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  dedup                 off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  mlslabel              none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  sync                  standard                                                         local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  dnodesize             legacy                                                           default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  refcompressratio      1.00x                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  written               151K                                                             -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  logicalused           42.5K                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  logicalreferenced     42.5K                                                            -
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  volmode               default                                                          default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  filesystem_limit      none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapshot_limit        none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  filesystem_count      none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapshot_count        none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  snapdev               hidden                                                           default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  acltype               posix                                                            inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  context               none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  fscontext             none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  defcontext            none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  rootcontext           none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  relatime              off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  redundant_metadata    all                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  overlay               on                                                               default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  encryption            off                                                              default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  keylocation           none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  keyformat             none                                                             default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  pbkdf2iters           0                                                                default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  special_small_blocks  0                                                                default
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  quantastor:shareid    64c83783-7d77-114b-9d70-182ff010627b                             local
qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe  quantastor:name       RZ1-NS-4                                                         inherited from qs-1612333a-0e2f-ba76-d799-5b43a58e643b
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe#

root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
fiotest: Laying out IO file (1 file / 102400MiB)
Jobs: 8 (f=6): [m(1),f(1),m(4),f(1),m(1)][100.0%][r=3079MiB/s,w=3013MiB/s][r=3079,w=3013 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3389091: Fri Apr  5 22:03:29 2024
  read: IOPS=3153, BW=3153MiB/s (3306MB/s)(185GiB/60015msec)
    slat (usec): min=164, max=190741, avg=1810.00, stdev=7072.41
    clat (usec): min=3, max=241226, avg=8841.41, stdev=18900.04
     lat (usec): min=1076, max=242054, avg=10652.95, stdev=21175.43
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    5], 40.00th=[    5], 50.00th=[    6], 60.00th=[    6],
     | 70.00th=[    7], 80.00th=[    8], 90.00th=[   10], 95.00th=[   17],
     | 99.00th=[  133], 99.50th=[  159], 99.90th=[  178], 99.95th=[  184],
     | 99.99th=[  197]
   bw (  MiB/s): min= 1382, max= 5148, per=98.67%, avg=3111.21, stdev=105.52, samples=958
   iops        : min= 1380, max= 5147, avg=3109.50, stdev=105.53, samples=958
  write: IOPS=3152, BW=3152MiB/s (3305MB/s)(185GiB/60015msec); 0 zone resets
    slat (usec): min=194, max=200524, avg=707.57, stdev=4107.47
    clat (usec): min=3, max=241615, avg=8927.78, stdev=19063.66
     lat (usec): min=396, max=242436, avg=9636.89, stdev=19790.89
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    4], 20.00th=[    4],
     | 30.00th=[    5], 40.00th=[    5], 50.00th=[    6], 60.00th=[    6],
     | 70.00th=[    7], 80.00th=[    8], 90.00th=[   11], 95.00th=[   17],
     | 99.00th=[  136], 99.50th=[  159], 99.90th=[  178], 99.95th=[  184],
     | 99.99th=[  197]
   bw (  MiB/s): min= 1467, max= 5131, per=98.69%, avg=3110.92, stdev=107.63, samples=958
   iops        : min= 1465, max= 5129, avg=3109.23, stdev=107.62, samples=958
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=23.53%, 10=66.51%, 20=5.73%, 50=1.73%
  lat (msec)   : 100=0.93%, 250=1.56%
  cpu          : usr=2.58%, sys=27.98%, ctx=1522483, majf=0, minf=94
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=189236,189176,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=3153MiB/s (3306MB/s), 3153MiB/s-3153MiB/s (3306MB/s-3306MB/s), io=185GiB (198GB), run=60015-60015msec
  WRITE: bw=3152MiB/s (3305MB/s), 3152MiB/s-3152MiB/s (3305MB/s-3305MB/s), io=185GiB (198GB), run=60015-60015msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=8839MiB/s][r=8838 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3646485: Fri Apr  5 22:04:47 2024
  read: IOPS=9611, BW=9611MiB/s (10.1GB/s)(563GiB/60001msec)
    slat (usec): min=131, max=13809, avg=827.55, stdev=203.24
    clat (usec): min=3, max=19052, avg=5828.47, stdev=1064.03
     lat (usec): min=619, max=19992, avg=6656.86, stdev=1198.96
    clat percentiles (usec):
     |  1.00th=[ 3261],  5.00th=[ 3589], 10.00th=[ 4424], 20.00th=[ 5080],
     | 30.00th=[ 5407], 40.00th=[ 5669], 50.00th=[ 5932], 60.00th=[ 6194],
     | 70.00th=[ 6390], 80.00th=[ 6652], 90.00th=[ 7046], 95.00th=[ 7308],
     | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[10421], 99.95th=[11731],
     | 99.99th=[15008]
   bw (  MiB/s): min= 7676, max=16612, per=100.00%, avg=9612.40, stdev=199.45, samples=958
   iops        : min= 7676, max=16612, avg=9612.12, stdev=199.47, samples=958
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=7.26%, 10=92.62%, 20=0.13%
  cpu          : usr=0.66%, sys=47.75%, ctx=963700, majf=0, minf=16475
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=576684,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=9611MiB/s (10.1GB/s), 9611MiB/s-9611MiB/s (10.1GB/s-10.1GB/s), io=563GiB (605GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=4158MiB/s][w=4158 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=429849: Fri Apr  5 22:05:59 2024
  write: IOPS=5643, BW=5644MiB/s (5918MB/s)(331GiB/60043msec); 0 zone resets
    slat (usec): min=193, max=623744, avg=1409.15, stdev=6126.13
    clat (usec): min=2, max=772759, avg=9921.26, stdev=21086.30
     lat (usec): min=659, max=774501, avg=11331.56, stdev=22933.48
    clat percentiles (usec):
     |  1.00th=[  1827],  5.00th=[  2212], 10.00th=[  2507], 20.00th=[  3228],
     | 30.00th=[  4490], 40.00th=[  6063], 50.00th=[  7963], 60.00th=[  9503],
     | 70.00th=[ 10683], 80.00th=[ 11994], 90.00th=[ 14615], 95.00th=[ 17957],
     | 99.00th=[ 53216], 99.50th=[122160], 99.90th=[354419], 99.95th=[471860],
     | 99.99th=[574620]
   bw (  MiB/s): min=   92, max=17877, per=99.04%, avg=5589.18, stdev=381.12, samples=956
   iops        : min=   89, max=17877, avg=5586.97, stdev=381.16, samples=956
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=2.49%, 4=24.33%, 10=37.25%, 20=32.34%, 50=2.54%
  lat (msec)   : 100=0.44%, 250=0.39%, 500=0.17%, 750=0.03%, 1000=0.01%
  cpu          : usr=3.90%, sys=21.40%, ctx=2865245, majf=0, minf=110
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,338854,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=5644MiB/s (5918MB/s), 5644MiB/s-5644MiB/s (5918MB/s-5918MB/s), io=331GiB (355GB), run=60043-60043msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=2): [f(4),m(1),f(2),m(1)][100.0%][r=2253MiB/s,w=2277MiB/s][r=18.0k,w=18.2k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=553905: Fri Apr  5 22:07:13 2024
  read: IOPS=17.6k, BW=2203MiB/s (2310MB/s)(129GiB/60001msec)
    slat (usec): min=14, max=188891, avg=392.10, stdev=2105.56
    clat (usec): min=2, max=204244, avg=1580.89, stdev=5221.32
     lat (usec): min=192, max=204449, avg=1973.71, stdev=6013.51
    clat percentiles (usec):
     |  1.00th=[   392],  5.00th=[   562], 10.00th=[   635], 20.00th=[   783],
     | 30.00th=[   906], 40.00th=[   996], 50.00th=[  1106], 60.00th=[  1237],
     | 70.00th=[  1418], 80.00th=[  1713], 90.00th=[  2180], 95.00th=[  2638],
     | 99.00th=[  5604], 99.50th=[ 14746], 99.90th=[ 81265], 99.95th=[137364],
     | 99.99th=[185598]
   bw (  MiB/s): min= 1171, max= 2984, per=99.12%, avg=2184.00, stdev=45.71, samples=954
   iops        : min= 9372, max=23871, avg=17470.30, stdev=365.73, samples=954
  write: IOPS=17.6k, BW=2206MiB/s (2313MB/s)(129GiB/60001msec); 0 zone resets
    slat (usec): min=18, max=185584, avg=53.13, stdev=709.27
    clat (usec): min=2, max=204818, avg=1598.05, stdev=5293.93
     lat (usec): min=28, max=204983, avg=1651.41, stdev=5378.73
    clat percentiles (usec):
     |  1.00th=[   396],  5.00th=[   570], 10.00th=[   635], 20.00th=[   783],
     | 30.00th=[   906], 40.00th=[   996], 50.00th=[  1123], 60.00th=[  1237],
     | 70.00th=[  1434], 80.00th=[  1729], 90.00th=[  2212], 95.00th=[  2671],
     | 99.00th=[  5997], 99.50th=[ 15139], 99.90th=[ 83362], 99.95th=[145753],
     | 99.99th=[187696]
   bw (  MiB/s): min= 1189, max= 2989, per=99.12%, avg=2186.35, stdev=46.70, samples=954
   iops        : min= 9513, max=23916, avg=17489.11, stdev=373.62, samples=954
  lat (usec)   : 4=0.01%, 50=0.01%, 100=0.01%, 250=0.36%, 500=3.25%
  lat (usec)   : 750=12.64%, 1000=23.88%
  lat (msec)   : 2=46.40%, 4=11.99%, 10=0.77%, 20=0.31%, 50=0.20%
  lat (msec)   : 100=0.12%, 250=0.08%
  cpu          : usr=2.12%, sys=19.18%, ctx=2076029, majf=0, minf=120
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1057651,1058748,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=2203MiB/s (2310MB/s), 2203MiB/s-2203MiB/s (2310MB/s-2310MB/s), io=129GiB (139GB), run=60001-60001msec
  WRITE: bw=2206MiB/s (2313MB/s), 2206MiB/s-2206MiB/s (2313MB/s-2313MB/s), io=129GiB (139GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4927MiB/s][r=39.4k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=966947: Fri Apr  5 22:08:31 2024
  read: IOPS=38.6k, BW=4825MiB/s (5059MB/s)(283GiB/60002msec)
    slat (usec): min=10, max=9052, avg=205.53, stdev=27.13
    clat (usec): min=2, max=11979, avg=1451.74, stdev=75.59
     lat (usec): min=185, max=12179, avg=1657.51, stdev=81.49
    clat percentiles (usec):
     |  1.00th=[ 1319],  5.00th=[ 1385], 10.00th=[ 1401], 20.00th=[ 1418],
     | 30.00th=[ 1434], 40.00th=[ 1434], 50.00th=[ 1450], 60.00th=[ 1450],
     | 70.00th=[ 1467], 80.00th=[ 1483], 90.00th=[ 1500], 95.00th=[ 1516],
     | 99.00th=[ 1614], 99.50th=[ 1663], 99.90th=[ 2147], 99.95th=[ 2704],
     | 99.99th=[ 3425]
   bw (  MiB/s): min= 4730, max= 5149, per=99.99%, avg=4824.48, stdev= 7.52, samples=954
   iops        : min=37840, max=41198, avg=38595.68, stdev=60.18, samples=954
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02%
  lat (msec)   : 2=99.85%, 4=0.12%, 10=0.01%, 20=0.01%
  cpu          : usr=1.00%, sys=16.59%, ctx=2325012, majf=0, minf=2126
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2316025,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4825MiB/s (5059MB/s), 4825MiB/s-4825MiB/s (5059MB/s-5059MB/s), io=283GiB (304GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=0): [f(8)][100.0%][w=4908MiB/s][w=39.3k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=997625: Fri Apr  5 22:09:46 2024
  write: IOPS=45.5k, BW=5686MiB/s (5962MB/s)(333GiB/60001msec); 0 zone resets
    slat (usec): min=15, max=441556, avg=172.31, stdev=1486.03
    clat (usec): min=2, max=607871, avg=1233.02, stdev=4947.07
     lat (usec): min=158, max=608024, avg=1405.83, stdev=5412.68
    clat percentiles (usec):
     |  1.00th=[   190],  5.00th=[   210], 10.00th=[   247], 20.00th=[   383],
     | 30.00th=[   635], 40.00th=[   873], 50.00th=[  1057], 60.00th=[  1188],
     | 70.00th=[  1287], 80.00th=[  1450], 90.00th=[  1713], 95.00th=[  1958],
     | 99.00th=[  3294], 99.50th=[  8979], 99.90th=[ 60556], 99.95th=[101188],
     | 99.99th=[214959]
   bw (  MiB/s): min=  121, max=19504, per=99.25%, avg=5643.13, stdev=377.94, samples=952
   iops        : min=  968, max=156036, avg=45143.40, stdev=3023.50, samples=952
  lat (usec)   : 4=0.01%, 250=10.39%, 500=14.22%, 750=10.62%, 1000=10.42%
  lat (msec)   : 2=49.76%, 4=3.76%, 10=0.35%, 20=0.18%, 50=0.16%
  lat (msec)   : 100=0.08%, 250=0.04%, 500=0.01%, 750=0.01%
  cpu          : usr=2.64%, sys=18.88%, ctx=3264930, majf=0, minf=108
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2729256,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=5686MiB/s (5962MB/s), 5686MiB/s-5686MiB/s (5962MB/s-5962MB/s), io=333GiB (358GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=36.9MiB/s,w=36.3MiB/s][r=9455,w=9288 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1203203: Fri Apr  5 22:11:02 2024
  read: IOPS=9892, BW=38.6MiB/s (40.5MB/s)(2318MiB/60001msec)
    slat (usec): min=3, max=193838, avg=275.76, stdev=1188.82
    clat (usec): min=2, max=210175, avg=2839.18, stdev=5851.06
     lat (usec): min=214, max=210344, avg=3115.37, stdev=6179.68
    clat percentiles (usec):
     |  1.00th=[  1500],  5.00th=[  1680], 10.00th=[  1778], 20.00th=[  1909],
     | 30.00th=[  2024], 40.00th=[  2114], 50.00th=[  2212], 60.00th=[  2311],
     | 70.00th=[  2474], 80.00th=[  2737], 90.00th=[  3916], 95.00th=[  4817],
     | 99.00th=[  8225], 99.50th=[ 16712], 99.90th=[104334], 99.95th=[152044],
     | 99.99th=[198181]
   bw (  KiB/s): min=29248, max=48852, per=99.39%, avg=39327.62, stdev=512.37, samples=952
   iops        : min= 7312, max=12212, avg=9831.16, stdev=128.06, samples=952
  write: IOPS=9886, BW=38.6MiB/s (40.5MB/s)(2317MiB/60001msec); 0 zone resets
    slat (usec): min=6, max=202184, avg=524.16, stdev=1879.06
    clat (usec): min=2, max=204265, avg=2827.73, stdev=5709.22
     lat (usec): min=351, max=210384, avg=3352.45, stdev=6371.73
    clat percentiles (usec):
     |  1.00th=[  1516],  5.00th=[  1680], 10.00th=[  1778], 20.00th=[  1926],
     | 30.00th=[  2024], 40.00th=[  2114], 50.00th=[  2212], 60.00th=[  2343],
     | 70.00th=[  2474], 80.00th=[  2737], 90.00th=[  3916], 95.00th=[  4817],
     | 99.00th=[  8029], 99.50th=[ 16319], 99.90th=[100140], 99.95th=[149947],
     | 99.99th=[196084]
   bw (  KiB/s): min=30056, max=49532, per=99.37%, avg=39293.36, stdev=504.59, samples=952
   iops        : min= 7514, max=12383, avg=9822.50, stdev=126.11, samples=952
  lat (usec)   : 4=0.01%, 10=0.01%, 250=0.01%, 500=0.01%, 750=0.01%
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=27.89%, 4=62.70%, 10=8.65%, 20=0.34%, 50=0.20%
  lat (msec)   : 100=0.12%, 250=0.10%
  cpu          : usr=0.99%, sys=14.91%, ctx=2279456, majf=0, minf=93
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=593533,593173,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=2318MiB (2431MB), run=60001-60001msec
  WRITE: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=2317MiB (2430MB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=176MiB/s][r=45.0k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1986101: Fri Apr  5 22:12:06 2024
  read: IOPS=46.9k, BW=183MiB/s (192MB/s)(10.7GiB/60001msec)
    slat (usec): min=2, max=24679, avg=168.33, stdev=55.13
    clat (usec): min=2, max=72819, avg=1194.26, stdev=250.65
     lat (usec): min=25, max=73201, avg=1362.86, stdev=279.35
    clat percentiles (usec):
     |  1.00th=[  734],  5.00th=[  873], 10.00th=[  906], 20.00th=[  971],
     | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1237], 60.00th=[ 1287],
     | 70.00th=[ 1319], 80.00th=[ 1352], 90.00th=[ 1418], 95.00th=[ 1483],
     | 99.00th=[ 1696], 99.50th=[ 1827], 99.90th=[ 2704], 99.95th=[ 3228],
     | 99.99th=[ 4817]
   bw (  KiB/s): min=159904, max=233136, per=100.00%, avg=187772.08, stdev=1301.30, samples=952
   iops        : min=39978, max=58284, avg=46942.92, stdev=325.32, samples=952
  lat (usec)   : 4=0.01%, 50=0.10%, 100=0.01%, 250=0.04%, 500=0.16%
  lat (usec)   : 750=0.76%, 1000=23.11%
  lat (msec)   : 2=75.55%, 4=0.25%, 10=0.02%, 20=0.01%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=1.59%, sys=19.18%, ctx=2941319, majf=0, minf=159
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2816258,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=183MiB/s (192MB/s), 183MiB/s-183MiB/s (192MB/s-192MB/s), io=10.7GiB (11.5GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1107MiB/s][w=283k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3272866: Fri Apr  5 22:13:22 2024
  write: IOPS=288k, BW=1127MiB/s (1182MB/s)(66.0GiB/60002msec); 0 zone resets
    slat (usec): min=3, max=47774, avg=26.44, stdev=144.65
    clat (nsec): min=1893, max=47875k, avg=194955.41, stdev=388221.04
     lat (usec): min=10, max=47885, avg=221.52, stdev=414.87
    clat percentiles (usec):
     |  1.00th=[   58],  5.00th=[   78], 10.00th=[   91], 20.00th=[  109],
     | 30.00th=[  120], 40.00th=[  133], 50.00th=[  149], 60.00th=[  167],
     | 70.00th=[  190], 80.00th=[  227], 90.00th=[  383], 95.00th=[  449],
     | 99.00th=[  523], 99.50th=[  570], 99.90th=[ 4555], 99.95th=[ 7570],
     | 99.99th=[22414]
   bw (  MiB/s): min= 1068, max= 1190, per=100.00%, avg=1126.90, stdev= 3.41, samples=953
   iops        : min=273622, max=304716, avg=288487.16, stdev=873.66, samples=953
  lat (usec)   : 2=0.01%, 4=0.01%, 20=0.01%, 50=0.50%, 100=14.08%
  lat (usec)   : 250=68.36%, 500=15.47%, 750=1.34%, 1000=0.06%
  lat (msec)   : 2=0.06%, 4=0.02%, 10=0.08%, 20=0.02%, 50=0.01%
  cpu          : usr=5.46%, sys=58.29%, ctx=7594536, majf=0, minf=94
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,17309095,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1127MiB/s (1182MB/s), 1127MiB/s-1127MiB/s (1182MB/s-1182MB/s), io=66.0GiB (70.9GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe#
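For anyone wanting to reproduce the multi-namespace layout from the top of the thread, carving a drive into equal namespaces with nvme-cli looks roughly like the sketch below. All values are illustrative: the block count depends on your drive's capacity (check `tnvmcap` in `id-ctrl`), LBA format 0 (512B sectors) is assumed, and the controller ID you attach to should come from `cntlid` in `id-ctrl` rather than the hard-coded 0 used here.

```shell
# Total drive capacity in bytes, to size the namespaces from:
nvme id-ctrl /dev/nvme0 | grep -i tnvmcap

# Example: split into 5 equal namespaces (size in 512B blocks; adjust
# BLOCKS to ~1/5 of your drive -- the figure below is a placeholder).
BLOCKS=1500000000
for i in 1 2 3 4 5; do
  nvme create-ns /dev/nvme0 --nsze=$BLOCKS --ncap=$BLOCKS --flbas=0
  nvme attach-ns /dev/nvme0 --namespace-id=$i --controllers=0
done

# Rescan so /dev/nvme0n1 .. /dev/nvme0n5 show up:
nvme reset /dev/nvme0
```

Note that not all drives support multiple namespaces; `nnn` (number of namespaces) in `nvme id-ctrl` output tells you the maximum the controller allows.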
# Kioxia CM6, RAID-10 (striped mirrors), sync=always

root@quantastor:/# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     Y1X0A01PTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         844.35  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme1n1     22D0A13MTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         836.78  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme2n1     Y1X0A035TCE8         Dell Ent NVMe CM6 RI 3.84TB              1         851.14  GB /   3.84  TB    512   B +  0 B   2.2.0
/dev/nvme3n1     Y1X0A02RTCE8         Dell Ent NVMe CM6 RI 3.84TB              1         842.81  GB /   3.84  TB    512   B +  0 B   2.2.0

  pool: qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
 state: ONLINE
config:

        NAME                                           STATE     READ WRITE CKSUM
        qs-6b9a100a-c7ea-c861-4875-16db1ba3acef        ONLINE       0     0     0
          mirror-0                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af6d201  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20d6b6801  ONLINE       0     0     0
          mirror-1                                     ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7b801  ONLINE       0     0     0
            nvme-eui.00000000000000008ce38ee20af7a201  ONLINE       0     0     0

errors: No known data errors
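For reference, a striped-mirror layout like the one in the status output above can be recreated along these lines. The pool name, EUI device paths, and dataset properties here are placeholders matching the settings visible in the `zfs get` output, not the exact commands used on this box:

```shell
# Two mirror vdevs striped together (RAID-10 equivalent in ZFS terms).
# ashift=12 forces 4K alignment, which is typical for NVMe.
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/nvme-eui.AAAA /dev/disk/by-id/nvme-eui.BBBB \
  mirror /dev/disk/by-id/nvme-eui.CCCC /dev/disk/by-id/nvme-eui.DDDD

# Dataset tuned like the test dataset (sync=always per the note above):
zfs create -o recordsize=128K -o atime=off -o xattr=sa \
  -o sync=always tank/NVMe
```

`sync=always` forces every write through the ZIL before acknowledging it, so expect it to cost a good chunk of write throughput versus `sync=standard`; that's part of what makes this run an interesting comparison point.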

root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# zfs get all qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe
NAME                                          PROPERTY              VALUE                                                            SOURCE
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  type                  filesystem                                                       -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  creation              Fri Apr  5 19:48 2024                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  used                  97.7G                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  available             6.70T                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  referenced            97.7G                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  compressratio         1.02x                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  mounted               yes                                                              -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  quota                 none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  reservation           none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  recordsize            128K                                                             local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  mountpoint            /mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  sharenfs              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  checksum              on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  compression           on                                                               inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  atime                 off                                                              local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  devices               on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  exec                  on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  setuid                on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  readonly              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  zoned                 off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapdir               hidden                                                           default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  aclmode               discard                                                          default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  aclinherit            restricted                                                       default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  createtxg             40                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  canmount              on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  xattr                 sa                                                               local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  copies                1                                                                default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  version               5                                                                -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  utf8only              off                                                              -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  normalization         none                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  casesensitivity       sensitive                                                        -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  vscan                 off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  nbmand                off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  sharesmb              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  refquota              none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  refreservation        none                                                             local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  guid                  3760159109045432288                                              -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  primarycache          metadata                                                         local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  secondarycache        metadata                                                         local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbysnapshots       0B                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbydataset         97.7G                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbychildren        0B                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  usedbyrefreservation  0B                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  logbias               latency                                                          default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  objsetid              68                                                               -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  dedup                 off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  mlslabel              none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  sync                  always                                                           local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  dnodesize             legacy                                                           default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  refcompressratio      1.02x                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  written               97.7G                                                            -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  logicalused           100G                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  logicalreferenced     100G                                                             -
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  volmode               default                                                          default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  filesystem_limit      none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapshot_limit        none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  filesystem_count      none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapshot_count        none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  snapdev               hidden                                                           default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  acltype               posix                                                            inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  context               none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  fscontext             none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  defcontext            none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  rootcontext           none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  relatime              off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  redundant_metadata    all                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  overlay               on                                                               default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  encryption            off                                                              default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  keylocation           none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  keyformat             none                                                             default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  pbkdf2iters           0                                                                default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  special_small_blocks  0                                                                default
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  quantastor:shareid    c5771906-907a-7701-feb8-a26c8caf1adf                             local
qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe  quantastor:name       R-10                                                             inherited from qs-6b9a100a-c7ea-c861-4875-16db1ba3acef
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe#
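Worth flagging before the numbers below: the dataset has several non-default ("local") properties — `xattr=sa`, `primarycache=metadata`, `secondarycache=metadata`, `refreservation=none`, and notably `sync=always` — which all affect fio results. A minimal sketch for reproducing them on your own test dataset (dataset name is a placeholder; it echoes the commands rather than running them):

```shell
#!/usr/bin/env bash
# Sketch: re-apply the locally-set (non-default) properties from the
# listing above to a test dataset before comparing fio numbers.
# "tank/NVMe" is an assumed dataset name -- substitute your own.
apply_props() {
  local ds="$1"
  local prop
  for prop in xattr=sa primarycache=metadata secondarycache=metadata \
              refreservation=none sync=always; do
    # echo-only for safety; drop the echo to actually apply
    echo zfs set "$prop" "$ds"
  done
}
apply_props "tank/NVMe"
```

In particular, `sync=always` forces every write through the ZIL, so the write numbers below are a worst case relative to the default `sync=standard`.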

root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=1498MiB/s,w=1469MiB/s][r=1498,w=1469 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=261092: Fri Apr  5 20:07:38 2024
  read: IOPS=1474, BW=1474MiB/s (1546MB/s)(86.4GiB/60003msec)
    slat (usec): min=212, max=19129, avg=2415.10, stdev=1152.38
    clat (usec): min=4, max=43327, avg=19148.42, stdev=3178.25
     lat (usec): min=3735, max=48036, avg=21564.43, stdev=3489.65
    clat percentiles (usec):
     |  1.00th=[12911],  5.00th=[14484], 10.00th=[15401], 20.00th=[16450],
     | 30.00th=[17433], 40.00th=[17957], 50.00th=[18744], 60.00th=[19530],
     | 70.00th=[20579], 80.00th=[21627], 90.00th=[23462], 95.00th=[24773],
     | 99.00th=[27657], 99.50th=[28967], 99.90th=[31589], 99.95th=[32900],
     | 99.99th=[36439]
   bw (  MiB/s): min= 1157, max= 1789, per=99.95%, avg=1473.46, stdev=15.22, samples=960
   iops        : min= 1156, max= 1789, avg=1472.93, stdev=15.24, samples=960
  write: IOPS=1475, BW=1476MiB/s (1547MB/s)(86.5GiB/60003msec); 0 zone resets
    slat (usec): min=929, max=18452, avg=2996.98, stdev=946.36
    clat (usec): min=3, max=41498, avg=18818.86, stdev=3235.17
     lat (usec): min=3104, max=45627, avg=21816.77, stdev=3472.29
    clat percentiles (usec):
     |  1.00th=[12649],  5.00th=[14091], 10.00th=[15008], 20.00th=[16057],
     | 30.00th=[16909], 40.00th=[17695], 50.00th=[18482], 60.00th=[19268],
     | 70.00th=[20317], 80.00th=[21365], 90.00th=[23200], 95.00th=[24511],
     | 99.00th=[27657], 99.50th=[28967], 99.90th=[31851], 99.95th=[33162],
     | 99.99th=[35914]
   bw (  MiB/s): min= 1191, max= 1749, per=99.95%, avg=1474.74, stdev=13.64, samples=960
   iops        : min= 1191, max= 1749, avg=1474.21, stdev=13.65, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 4=0.01%, 10=0.02%, 20=65.86%, 50=34.11%
  cpu          : usr=1.35%, sys=14.27%, ctx=706844, majf=0, minf=92
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=88457,88536,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=1474MiB/s (1546MB/s), 1474MiB/s-1474MiB/s (1546MB/s-1546MB/s), io=86.4GiB (92.8GB), run=60003-60003msec
  WRITE: bw=1476MiB/s (1547MB/s), 1476MiB/s-1476MiB/s (1547MB/s-1547MB/s), io=86.5GiB (92.8GB), run=60003-60003msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=11.5GiB/s][r=11.8k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=627036: Fri Apr  5 20:08:52 2024
  read: IOPS=12.4k, BW=12.1GiB/s (13.0GB/s)(727GiB/60002msec)
    slat (usec): min=134, max=13526, avg=640.20, stdev=139.39
    clat (usec): min=3, max=18407, avg=4513.28, stdev=647.16
     lat (usec): min=524, max=19315, avg=5154.23, stdev=727.31
    clat percentiles (usec):
     |  1.00th=[ 3326],  5.00th=[ 3556], 10.00th=[ 3720], 20.00th=[ 3949],
     | 30.00th=[ 4146], 40.00th=[ 4293], 50.00th=[ 4490], 60.00th=[ 4621],
     | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5669],
     | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7439], 99.95th=[ 7898],
     | 99.99th=[ 9110]
   bw (  MiB/s): min= 9796, max=16436, per=100.00%, avg=12415.80, stdev=175.43, samples=953
   iops        : min= 9796, max=16436, avg=12415.59, stdev=175.42, samples=953
  lat (usec)   : 4=0.01%, 10=0.01%, 750=0.01%
  lat (msec)   : 2=0.01%, 4=21.88%, 10=78.11%, 20=0.01%
  cpu          : usr=0.89%, sys=46.48%, ctx=1083082, majf=0, minf=16473
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=744773,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=12.1GiB/s (13.0GB/s), 12.1GiB/s-12.1GiB/s (13.0GB/s-13.0GB/s), io=727GiB (781GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1624MiB/s][w=1624 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1377072: Fri Apr  5 20:10:01 2024
  write: IOPS=1650, BW=1651MiB/s (1731MB/s)(96.7GiB/60004msec); 0 zone resets
    slat (usec): min=1159, max=25177, avg=4841.26, stdev=435.21
    clat (usec): min=3, max=53261, avg=33917.71, stdev=2409.17
     lat (usec): min=4942, max=57930, avg=38759.78, stdev=2719.63
    clat percentiles (usec):
     |  1.00th=[24511],  5.00th=[30278], 10.00th=[31589], 20.00th=[33162],
     | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341],
     | 70.00th=[34866], 80.00th=[34866], 90.00th=[35390], 95.00th=[35914],
     | 99.00th=[38011], 99.50th=[39584], 99.90th=[42730], 99.95th=[44303],
     | 99.99th=[50070]
   bw (  MiB/s): min= 1552, max= 2536, per=99.93%, avg=1649.74, stdev=13.04, samples=960
   iops        : min= 1552, max= 2536, avg=1649.47, stdev=13.04, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 10=0.02%, 20=0.87%, 50=99.10%, 100=0.01%
  cpu          : usr=1.51%, sys=7.91%, ctx=341664, majf=0, minf=78
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=99.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,99060,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1651MiB/s (1731MB/s), 1651MiB/s-1651MiB/s (1731MB/s-1731MB/s), io=96.7GiB (104GB), run=60004-60004msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=890MiB/s,w=878MiB/s][r=7123,w=7023 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1620871: Fri Apr  5 20:11:10 2024
  read: IOPS=7172, BW=897MiB/s (940MB/s)(52.5GiB/60001msec)
    slat (usec): min=17, max=12281, avg=591.90, stdev=188.78
    clat (usec): min=2, max=16036, avg=3904.79, stdev=590.65
     lat (usec): min=520, max=16782, avg=4497.14, stdev=659.20
    clat percentiles (usec):
     |  1.00th=[ 3130],  5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3523],
     | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884],
     | 70.00th=[ 3982], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4817],
     | 99.00th=[ 6456], 99.50th=[ 7111], 99.90th=[ 8586], 99.95th=[ 9503],
     | 99.99th=[11338]
   bw (  KiB/s): min=810651, max=1023745, per=99.98%, avg=917937.38, stdev=5219.31, samples=954
   iops        : min= 6332, max= 7998, avg=7171.17, stdev=40.79, samples=954
  write: IOPS=7165, BW=896MiB/s (939MB/s)(52.5GiB/60001msec); 0 zone resets
    slat (usec): min=301, max=7525, avg=516.75, stdev=133.20
    clat (usec): min=3, max=16100, avg=3909.15, stdev=579.58
     lat (usec): min=490, max=16787, avg=4426.34, stdev=629.77
    clat percentiles (usec):
     |  1.00th=[ 3130],  5.00th=[ 3326], 10.00th=[ 3425], 20.00th=[ 3523],
     | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 3884],
     | 70.00th=[ 4015], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4817],
     | 99.00th=[ 6456], 99.50th=[ 7046], 99.90th=[ 8455], 99.95th=[ 9241],
     | 99.99th=[11207]
   bw (  KiB/s): min=810864, max=1021435, per=99.99%, avg=917125.87, stdev=5348.01, samples=954
   iops        : min= 6334, max= 7979, avg=7164.84, stdev=41.79, samples=954
  lat (usec)   : 4=0.01%, 10=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=69.98%, 10=29.99%, 20=0.03%
  cpu          : usr=1.16%, sys=10.77%, ctx=1931935, majf=0, minf=95
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=430370,429958,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=897MiB/s (940MB/s), 897MiB/s-897MiB/s (940MB/s-940MB/s), io=52.5GiB (56.4GB), run=60001-60001msec
  WRITE: bw=896MiB/s (939MB/s), 896MiB/s-896MiB/s (939MB/s-939MB/s), io=52.5GiB (56.4GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=4453MiB/s][r=35.6k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1886056: Fri Apr  5 20:12:19 2024
  read: IOPS=35.6k, BW=4447MiB/s (4664MB/s)(261GiB/60002msec)
    slat (usec): min=11, max=2207, avg=223.09, stdev=20.78
    clat (usec): min=2, max=3661, avg=1574.84, stdev=59.78
     lat (usec): min=206, max=3899, avg=1798.17, stdev=64.66
    clat percentiles (usec):
     |  1.00th=[ 1434],  5.00th=[ 1483], 10.00th=[ 1516], 20.00th=[ 1532],
     | 30.00th=[ 1549], 40.00th=[ 1565], 50.00th=[ 1582], 60.00th=[ 1582],
     | 70.00th=[ 1598], 80.00th=[ 1614], 90.00th=[ 1631], 95.00th=[ 1647],
     | 99.00th=[ 1696], 99.50th=[ 1713], 99.90th=[ 1893], 99.95th=[ 2147],
     | 99.99th=[ 2999]
   bw (  MiB/s): min= 4336, max= 4586, per=99.97%, avg=4446.30, stdev= 6.77, samples=955
   iops        : min=34688, max=36688, avg=35570.16, stdev=54.12, samples=955
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=99.92%, 4=0.08%
  cpu          : usr=0.86%, sys=16.41%, ctx=2137378, majf=0, minf=2141
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2134864,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=4447MiB/s (4664MB/s), 4447MiB/s-4447MiB/s (4664MB/s-4664MB/s), io=261GiB (280GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=1680MiB/s][w=13.4k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=1886667: Fri Apr  5 20:13:35 2024
  write: IOPS=11.6k, BW=1448MiB/s (1518MB/s)(84.8GiB/60007msec); 0 zone resets
    slat (usec): min=374, max=12497, avg=688.16, stdev=215.74
    clat (usec): min=3, max=28312, avg=4836.03, stdev=957.34
     lat (usec): min=3840, max=29095, avg=5524.56, stdev=1074.29
    clat percentiles (usec):
     |  1.00th=[ 3982],  5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4113],
     | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4424], 60.00th=[ 4621],
     | 70.00th=[ 5669], 80.00th=[ 5932], 90.00th=[ 6063], 95.00th=[ 6063],
     | 99.00th=[ 6390], 99.50th=[ 7177], 99.90th=[13829], 99.95th=[14877],
     | 99.99th=[17171]
   bw (  MiB/s): min= 1147, max= 1721, per=99.99%, avg=1447.61, stdev=20.07, samples=960
   iops        : min= 9182, max=13774, avg=11580.58, stdev=160.49, samples=960
  lat (usec)   : 4=0.01%, 10=0.01%
  lat (msec)   : 4=1.83%, 10=97.90%, 20=0.26%, 50=0.01%
  cpu          : usr=0.82%, sys=7.82%, ctx=1489224, majf=0, minf=101
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,695014,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=1448MiB/s (1518MB/s), 1448MiB/s-1448MiB/s (1518MB/s-1518MB/s), io=84.8GiB (91.1GB), run=60007-60007msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [m(8)][100.0%][r=32.1MiB/s,w=31.7MiB/s][r=8227,w=8121 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2153046: Fri Apr  5 20:14:58 2024
  read: IOPS=8448, BW=33.0MiB/s (34.6MB/s)(1980MiB/60001msec)
    slat (usec): min=4, max=179347, avg=314.73, stdev=1063.48
    clat (usec): min=2, max=194939, avg=3312.56, stdev=5326.30
     lat (usec): min=214, max=195135, avg=3627.68, stdev=5573.00
    clat percentiles (usec):
     |  1.00th=[  1811],  5.00th=[  2040], 10.00th=[  2180], 20.00th=[  2343],
     | 30.00th=[  2474], 40.00th=[  2606], 50.00th=[  2704], 60.00th=[  2835],
     | 70.00th=[  2999], 80.00th=[  3228], 90.00th=[  4080], 95.00th=[  6194],
     | 99.00th=[ 10421], 99.50th=[ 15139], 99.90th=[ 94897], 99.95th=[143655],
     | 99.99th=[179307]
   bw (  KiB/s): min=24866, max=43921, per=99.52%, avg=33632.33, stdev=531.74, samples=952
   iops        : min= 6216, max=10980, avg=8407.69, stdev=132.93, samples=952
  write: IOPS=8441, BW=32.0MiB/s (34.6MB/s)(1978MiB/60001msec); 0 zone resets
    slat (usec): min=65, max=192179, avg=624.67, stdev=2038.62
    clat (usec): min=2, max=194829, avg=3322.17, stdev=5367.54
     lat (usec): min=540, max=195417, avg=3947.36, stdev=5981.16
    clat percentiles (usec):
     |  1.00th=[  1827],  5.00th=[  2057], 10.00th=[  2180], 20.00th=[  2343],
     | 30.00th=[  2474], 40.00th=[  2606], 50.00th=[  2704], 60.00th=[  2835],
     | 70.00th=[  2999], 80.00th=[  3228], 90.00th=[  4047], 95.00th=[  6194],
     | 99.00th=[ 10421], 99.50th=[ 15533], 99.90th=[ 95945], 99.95th=[143655],
     | 99.99th=[179307]
   bw (  KiB/s): min=25617, max=43896, per=99.53%, avg=33605.72, stdev=512.89, samples=952
   iops        : min= 6404, max=10974, avg=8401.03, stdev=128.21, samples=952
  lat (usec)   : 4=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=3.70%, 4=86.08%, 10=9.11%, 20=0.73%, 50=0.18%
  lat (msec)   : 100=0.09%, 250=0.10%
  cpu          : usr=0.84%, sys=13.27%, ctx=2949476, majf=0, minf=101
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=506928,506477,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=1980MiB (2076MB), run=60001-60001msec
  WRITE: bw=32.0MiB/s (34.6MB/s), 32.0MiB/s-32.0MiB/s (34.6MB/s-34.6MB/s), io=1978MiB (2075MB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=206MiB/s][r=52.6k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=2549639: Fri Apr  5 20:16:56 2024
  read: IOPS=51.8k, BW=202MiB/s (212MB/s)(11.9GiB/60001msec)
    slat (usec): min=2, max=12944, avg=152.49, stdev=41.11
    clat (usec): min=2, max=14155, avg=1081.50, stdev=157.10
     lat (usec): min=32, max=14324, avg=1234.21, stdev=173.49
    clat percentiles (usec):
     |  1.00th=[  725],  5.00th=[  914], 10.00th=[  938], 20.00th=[  971],
     | 30.00th=[ 1004], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1090],
     | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[ 1270], 95.00th=[ 1352],
     | 99.00th=[ 1483], 99.50th=[ 1532], 99.90th=[ 1745], 99.95th=[ 2212],
     | 99.99th=[ 3032]
   bw (  KiB/s): min=178608, max=234672, per=99.97%, avg=207257.51, stdev=1222.99, samples=952
   iops        : min=44652, max=58668, avg=51814.24, stdev=305.75, samples=952
  lat (usec)   : 4=0.01%, 50=0.08%, 100=0.01%, 250=0.03%, 500=0.13%
  lat (usec)   : 750=0.91%, 1000=29.02%
  lat (msec)   : 2=69.78%, 4=0.06%, 10=0.01%, 20=0.01%
  cpu          : usr=1.44%, sys=16.91%, ctx=3163852, majf=0, minf=157
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=3109821,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=11.9GiB (12.7GB), run=60001-60001msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe# fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
fiotest: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=8
...
fio-3.16
Starting 8 processes
Jobs: 8 (f=8): [W(8)][100.0%][w=220MiB/s][w=56.3k IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=3186350: Fri Apr  5 20:18:07 2024
  write: IOPS=53.6k, BW=209MiB/s (220MB/s)(12.3GiB/60002msec); 0 zone resets
    slat (usec): min=60, max=59223, avg=147.42, stdev=291.36
    clat (usec): min=2, max=60549, avg=1045.61, stdev=816.07
     lat (usec): min=78, max=60788, avg=1193.26, stdev=874.85
    clat percentiles (usec):
     |  1.00th=[  717],  5.00th=[  783], 10.00th=[  832], 20.00th=[  881],
     | 30.00th=[  922], 40.00th=[  955], 50.00th=[  996], 60.00th=[ 1037],
     | 70.00th=[ 1074], 80.00th=[ 1139], 90.00th=[ 1237], 95.00th=[ 1336],
     | 99.00th=[ 1614], 99.50th=[ 1827], 99.90th=[ 9372], 99.95th=[11863],
     | 99.99th=[48497]
   bw (  KiB/s): min=182856, max=244592, per=99.99%, avg=214395.29, stdev=1256.31, samples=958
   iops        : min=45714, max=61148, avg=53598.69, stdev=314.07, samples=958
  lat (usec)   : 4=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=2.38%
  lat (usec)   : 1000=48.91%
  lat (msec)   : 2=48.32%, 4=0.11%, 10=0.19%, 20=0.06%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=1.58%, sys=26.05%, ctx=9675464, majf=0, minf=94
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,3216517,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
  WRITE: bw=209MiB/s (220MB/s), 209MiB/s-209MiB/s (220MB/s-220MB/s), io=12.3GiB (13.2GB), run=60002-60002msec
root@quantastor:/mnt/storage-pools/qs-6b9a100a-c7ea-c861-4875-16db1ba3acef/NVMe#
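The nine runs above are the same fio invocation swept across three block sizes (1M, 128K, 4K) and three patterns (randrw, read, write). A minimal sketch to drive the sweep (echo-only so it can be reviewed first; flags copied from the transcript):

```shell
#!/usr/bin/env bash
# Sketch of the benchmark sweep used above: 3 block sizes x 3 access
# patterns, identical fio flags to the transcript. Echo-only for
# safety -- remove the echo to actually run fio.
fio_sweep() {
  local bs rw
  for bs in 1M 128K 4K; do
    for rw in randrw read write; do
      echo fio --name=fiotest --filename=fio.test --size=100Gb \
        --rw="$rw" --bs="$bs" --direct=1 --numjobs=8 \
        --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
    done
  done
}
fio_sweep
```

Note the runs reuse the same `fio.test` file, so later runs read data laid out by earlier ones; for a clean A/B against a single-namespace pool you would want to delete the file between layouts.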
# bash history from testing/screwing around

  226  2024-04-05 20:22:20 nvme list
  227  2024-04-05 20:22:27 cd /
  228  2024-04-05 20:22:28 nvme list
  229  2024-04-05 20:23:49 clear
  230  2024-04-05 20:25:28 nvme id-ctrl /dev/nvme0 | grep ^nn
  231  2024-04-05 20:25:46 # nvme id-ns /dev/nvme0n1 | grep "in use"
  232  2024-04-05 20:26:30 nvme id-ctrl /dev/nvme0 | grep ^cntlid
  233  2024-04-05 20:27:07 nvme delete-ns /dev/nvme0 -n 1
  234  2024-04-05 20:27:49 nvme list
  235  2024-04-05 20:28:21 nvme id-ctrl /dev/nvme0
  236  2024-04-05 20:32:02 nvme id-ns -H /dev/nvme0 | grep "LBA Format"
  237  2024-04-05 20:32:18 nvme id-ns -H /dev/nvme1n0 | grep "LBA Format"
  238  2024-04-05 20:32:25 nvme id-ns -H /dev/nvme1n1 | grep "LBA Format"
  239  2024-04-05 20:32:50 nvme id-ns -H /dev/nvme1n1
  240  2024-04-05 20:39:58 nvme create-ns /dev/nvme0 --nsze=1875369132 --ncap=1875369132 --flbas=0 -dps=0
  241  2024-04-05 20:40:34 nvme delete-ns /dev/nvme0 -n 3
  242  2024-04-05 20:40:37 nvme delete-ns /dev/nvme0 -n 2
  243  2024-04-05 20:40:39 nvme delete-ns /dev/nvme0 -n 1
  244  2024-04-05 20:41:27 nvme create-ns /dev/nvme0 --nsze=1875000000 --ncap=1875000000 --flbas=0 -dps=0
  245  2024-04-05 20:41:38 nvme delete-ns /dev/nvme0 -n 3
  246  2024-04-05 20:41:41 nvme delete-ns /dev/nvme0 -n 2
  247  2024-04-05 20:41:43 nvme delete-ns /dev/nvme0 -n 1
  248  2024-04-05 20:45:45 nvme create-ns /dev/nvme0 --nsze=234420000 --ncap=234420000 --flbas=0 -dps=0
  249  2024-04-05 20:46:04 nvme attach-ns /dev/nvme0 --namespace-id=1 -controllers=0x1
  250  2024-04-05 20:46:11 nvme attach-ns /dev/nvme0 --namespace-id=2 -controllers=0x1
  251  2024-04-05 20:46:17 nvme attach-ns /dev/nvme0 --namespace-id=3 -controllers=0x1
  252  2024-04-05 20:46:23 nvme attach-ns /dev/nvme0 --namespace-id=4 -controllers=0x1
  253  2024-04-05 20:46:27 ncme list
  254  2024-04-05 20:46:31 nvme list
  255  2024-04-05 20:48:44 nvme delete-ns /dev/nvme0 -n 4
  256  2024-04-05 20:48:47 nvme delete-ns /dev/nvme0 -n 3
  257  2024-04-05 20:48:52 nvme delete-ns /dev/nvme0 -n 2
  258  2024-04-05 20:48:55 nvme delete-ns /dev/nvme0 -n 1
  259  2024-04-05 20:49:21 nvme create-ns /dev/nvme0 --nsze=1875000000 --ncap=1875000000 --flbas=0 -dps=0
  260  2024-04-05 20:49:41 nvme list
  261  2024-04-05 20:50:04 nvme id-ns -H /dev/nvme0n1
  262  2024-04-05 20:50:09 nvme id-ctrl /dev/nvme0
  263  2024-04-05 20:53:41 nvme create-ns /dev/nvme0 --nsze=960000000000 --ncap=960000000000 --flbas=0 -dps=0
  264  2024-04-05 20:54:29 nvme create-ns /dev/nvme0 --nsze=960000000 --ncap=960000000 --flbas=0 -dps=0
  265  2024-04-05 20:54:43 nvme delete-ns /dev/nvme0 -n 4
  266  2024-04-05 20:54:46 nvme delete-ns /dev/nvme0 -n 3
  267  2024-04-05 20:54:48 nvme delete-ns /dev/nvme0 -n 2
  268  2024-04-05 20:54:50 nvme delete-ns /dev/nvme0 -n 1
  269  2024-04-05 20:54:56 nvme create-ns /dev/nvme0 --nsze=960000000 --ncap=960000000 --flbas=0 -dps=0
  270  2024-04-05 20:55:22 nvme attach-ns /dev/nvme0 --namespace-id=1 -controllers=0x1
  271  2024-04-05 20:55:27 nvme attach-ns /dev/nvme0 --namespace-id=2 -controllers=0x1
  272  2024-04-05 20:55:31 nvme attach-ns /dev/nvme0 --namespace-id=3 -controllers=0x1
  273  2024-04-05 20:55:37 nvme attach-ns /dev/nvme0 --namespace-id=4 -controllers=0x1
  274  2024-04-05 20:55:41 nvme list
  275  2024-04-05 20:56:20 nvme id-ns /dev/nvme1
  276  2024-04-05 20:57:23 nvme delete-ns /dev/nvme0 -n 4
  277  2024-04-05 20:57:26 nvme delete-ns /dev/nvme0 -n 3
  278  2024-04-05 20:57:29 nvme delete-ns /dev/nvme0 -n 2
  279  2024-04-05 20:57:31 nvme delete-ns /dev/nvme0 -n 1
  280  2024-04-05 21:01:47 nvme id-ctrl /dev/nvme1
  281  2024-04-05 21:04:15 nvme create-ns /dev/nvme0 --nsze=1875369116 --ncap=1875369116 --flbas=0 -dps=0
  282  2024-04-05 21:04:40 nvme id-ctrl /dev/nvme0
  283  2024-04-05 21:05:06 nvme create-ns /dev/nvme0 --nsze=957777707008 --ncap=957777707008 --flbas=0 -dps=0
  284  2024-04-05 21:05:18 nvme create-ns /dev/nvme0 --nsze=957777707000 --ncap=957777707000 --flbas=0 -dps=0
  285  2024-04-05 21:05:49 nvme attach-ns /dev/nvme0 --namespace-id=1 -controllers=0x1
  286  2024-04-05 21:05:54 nvme list
  287  2024-04-05 21:16:48 nvme delete-ns /dev/nvme0 -n 4
  288  2024-04-05 21:16:51 nvme delete-ns /dev/nvme0 -n 3
  289  2024-04-05 21:16:54 nvme delete-ns /dev/nvme0 -n 2
  290  2024-04-05 21:16:56 nvme delete-ns /dev/nvme0 -n 1
  291  2024-04-05 21:17:18 nvme create-ns /dev/nvme0 --nsze=1999999999 --ncap=1999999999 --flbas=0 -dps=0
  292  2024-04-05 21:17:32 nvme attach-ns /dev/nvme0 --namespace-id=1 -controllers=0x1
  293  2024-04-05 21:17:36 nvme list
  294  2024-04-05 21:23:04 nvme delete-ns /dev/nvme0 -n 3
  295  2024-04-05 21:23:06 nvme delete-ns /dev/nvme0 -n 2
  296  2024-04-05 21:23:08 nvme delete-ns /dev/nvme0 -n 1
  297  2024-04-05 21:23:29 nvme create-ns /dev/nvme0 --nsze=1875000000 --ncap=1875000000 --flbas=0 -dps=0
  298  2024-04-05 21:23:42 nvme delete-ns /dev/nvme0 -n 3
  299  2024-04-05 21:23:45 nvme delete-ns /dev/nvme0 -n 2
  300  2024-04-05 21:23:47 nvme delete-ns /dev/nvme0 -n 1
  301  2024-04-05 21:24:15 nvme create-ns /dev/nvme0 --nsze=1874000000 --ncap=1874000000 --flbas=0 -dps=0
  302  2024-04-05 21:24:33 nvme attach-ns /dev/nvme0 --namespace-id=1 -controllers=0x1
  303  2024-04-05 21:24:40 nvme attach-ns /dev/nvme0 --namespace-id=2 -controllers=0x1
  304  2024-04-05 21:24:47 nvme attach-ns /dev/nvme0 --namespace-id=3 -controllers=0x1
  305  2024-04-05 21:24:51 nvme attach-ns /dev/nvme0 --namespace-id=4 -controllers=0x1
  306  2024-04-05 21:24:55 nvme list
  307  2024-04-05 21:41:24 nvme delete-ns /dev/nvme1 -n 1
  308  2024-04-05 21:42:11 nvme create-ns /dev/nvme1 --nsze=1874000000 --ncap=1874000000 --flbas=0 -dps=0
  309  2024-04-05 21:42:31 nvme attach-ns /dev/nvme1 --namespace-id=1 -controllers=0x1
  310  2024-04-05 21:42:37 nvme attach-ns /dev/nvme1 --namespace-id=2 -controllers=0x1
  311  2024-04-05 21:42:42 nvme attach-ns /dev/nvme1 --namespace-id=3 -controllers=0x1
  312  2024-04-05 21:42:48 nvme attach-ns /dev/nvme1 --namespace-id=4 -controllers=0x1
  313  2024-04-05 21:42:53 nvme list
  314  2024-04-05 21:43:11 nvme delete-ns /dev/nvme2 -n 1
  315  2024-04-05 21:43:59 nvme create-ns /dev/nvme2 --nsze=1874000000 --ncap=1874000000 --flbas=0 -dps=0
  316  2024-04-05 21:44:21 nvme attach-ns /dev/nvme2 --namespace-id=1 -controllers=0x1
  317  2024-04-05 21:44:28 nvme attach-ns /dev/nvme2 --namespace-id=2 -controllers=0x1
  318  2024-04-05 21:44:34 nvme attach-ns /dev/nvme2 --namespace-id=3 -controllers=0x1
  319  2024-04-05 21:44:39 nvme attach-ns /dev/nvme2 --namespace-id=4 -controllers=0x1
  320  2024-04-05 21:44:46 nvme list
  321  2024-04-05 21:45:00 nvme delete-ns /dev/nvme3 -n 1
  322  2024-04-05 21:45:45 nvme attach-ns /dev/nvme3 --namespace-id=1 -controllers=0x1
  323  2024-04-05 21:45:58 nvme create-ns /dev/nvme2 --nsze=1874000000 --ncap=1874000000 --flbas=0 -dps=0
  324  2024-04-05 21:46:06 nvme create-ns /dev/nvme3 --nsze=1874000000 --ncap=1874000000 --flbas=0 -dps=0
  325  2024-04-05 21:46:16 nvme attach-ns /dev/nvme3 --namespace-id=1 -controllers=0x1
  326  2024-04-05 21:46:22 nvme attach-ns /dev/nvme3 --namespace-id=2 -controllers=0x1
  327  2024-04-05 21:46:27 nvme attach-ns /dev/nvme3 --namespace-id=3 -controllers=0x1
  328  2024-04-05 21:46:32 nvme attach-ns /dev/nvme3 --namespace-id=4 -controllers=0x1
  329  2024-04-05 21:46:37 nvme list
  330  2024-04-05 21:56:02 zpool status -v
  331  2024-04-05 21:56:20 zpool iostat -v
  332  2024-04-05 21:57:04 cd /mnt/storage-pools/qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe/
  333  2024-04-05 21:58:30 zfs set atime=off qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe
  334  2024-04-05 21:58:41 zfs get qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe
  335  2024-04-05 21:58:51 zfs get all qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe
  336  2024-04-05 21:59:53 history | grep fio
  337  2024-04-05 22:01:40 clear
  338  2024-04-05 22:01:49 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  339  2024-04-05 22:03:46 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  340  2024-04-05 22:04:58 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  341  2024-04-05 22:06:13 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  342  2024-04-05 22:07:30 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  343  2024-04-05 22:08:45 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  344  2024-04-05 22:10:01 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  345  2024-04-05 22:11:06 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  346  2024-04-05 22:12:21 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  347  2024-04-05 22:16:13 zfs get all qs-1612333a-0e2f-ba76-d799-5b43a58e643b/NVMe
  348  2024-04-05 22:16:43 clear
  349  2024-04-05 22:16:57 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  350  2024-04-05 22:18:12 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  351  2024-04-05 22:19:23 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  352  2024-04-05 22:20:37 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  353  2024-04-05 22:21:54 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  354  2024-04-05 22:23:07 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  355  2024-04-05 22:24:21 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  356  2024-04-05 22:25:32 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  357  2024-04-05 22:26:44 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  358  2024-04-05 22:28:25 zpool iostat -v
  359  2024-04-05 22:30:29 zpool status -d
  360  2024-04-05 22:30:37 zpool status -i
  361  2024-04-05 22:30:51 zpool status -L
  362  2024-04-05 22:33:28 cd /
  363  2024-04-05 22:37:53 zpool status -L
  364  2024-04-05 22:38:10 zpool status -v
  365  2024-04-05 22:39:22 zpool iostat
  366  2024-04-05 22:39:31 zpool iostat -v
  367  2024-04-05 22:42:54 cd /mnt/storage-pools/qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4
  368  2024-04-05 22:42:55 ls
  369  2024-04-05 22:43:27 zfs set atime=off qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4
  370  2024-04-05 22:43:35 zfs get all qs-8fe2f9fd-1f01-4505-6965-0fcb28b49def/NVNe-R10-NS-4
  371  2024-04-05 22:46:02 clear
  372  2024-04-05 22:46:09 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  373  2024-04-05 22:47:48 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  374  2024-04-05 22:48:49 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  375  2024-04-05 22:49:50 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  376  2024-04-05 22:50:51 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  377  2024-04-05 22:51:51 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=128K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  378  2024-04-05 22:52:52 fio --name=fiotest --filename=fio.test --size=100Gb --rw=randrw --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  379  2024-04-05 22:53:53 fio --name=fiotest --filename=fio.test --size=100Gb --rw=read --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  380  2024-04-05 22:54:54 fio --name=fiotest --filename=fio.test --size=100Gb --rw=write --bs=4K --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
  381  2024-04-05 22:57:40 zpool iostat -v
  382  2024-04-05 22:57:47 zpool iostat -v
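
The namespace setup in the history above boils down to a create-ns/attach-ns pair per namespace, repeated per drive. Here is a condensed sketch of that workflow (hypothetical helper name `gen_ns_cmds`; same `--nsze`/`--ncap` values as the history, but using the unambiguous double-dash spellings `--dps` and `--controllers`). It prints the commands instead of running them, so you can review before piping to `sh` on real hardware:

```shell
#!/bin/sh
# Print the nvme-cli commands to carve one NVMe drive into N equal
# namespaces and attach each to controller 0x1. --flbas=0 selects the
# first supported LBA format; sizes are in blocks of that format.
gen_ns_cmds() {
  dev=$1    # e.g. /dev/nvme0
  count=$2  # namespaces per drive
  nsze=$3   # blocks per namespace
  i=1
  while [ "$i" -le "$count" ]; do
    printf 'nvme create-ns %s --nsze=%s --ncap=%s --flbas=0 --dps=0\n' \
      "$dev" "$nsze" "$nsze"
    printf 'nvme attach-ns %s --namespace-id=%s --controllers=0x1\n' \
      "$dev" "$i"
    i=$((i + 1))
  done
}

# Same layout as the history: 4 namespaces of 1874000000 blocks each.
gen_ns_cmds /dev/nvme0 4 1874000000
```

Note the history shows some trial and error with `--nsze` before landing on 1874000000; check `nvme id-ctrl` for your drive's unallocated capacity first, since create-ns fails if the requested size doesn't fit.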

Ok, well, if anyone wants the data and to do something cool with it, there it is.
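
If anyone wants to re-run the same benchmarks, the fio runs above are just the 3×3 matrix of block sizes {1M, 128K, 4K} crossed with patterns {randrw, read, write}, 60 s each, identical flags otherwise. A sketch (hypothetical helper `fio_matrix`, not from the original runs) that prints the commands so you can review or pipe them to `sh` from a test dataset:

```shell
#!/bin/sh
# Regenerate the 9 fio invocations from the history: every block size
# crossed with every access pattern, same flags as the original runs.
fio_matrix() {
  for bs in 1M 128K 4K; do
    for rw in randrw read write; do
      echo "fio --name=fiotest --filename=fio.test --size=100Gb --rw=$rw --bs=$bs --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60"
    done
  done
}

fio_matrix
```

Run it from inside the dataset under test (as in the history, with `atime=off` set on the dataset first) so `fio.test` lands on the pool being measured.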

Hoping @wendell doesn’t get alerts for large posts >_>, don’t want to catch an auto-ban.

This topic was automatically closed 273 days after the last reply. New replies are no longer allowed.