Hi guys
I have a quick question for the ZFS gurus…
I have set up a simple raidz2 with 13 disks.
I am currently moving data to a volume in the pool/aggregate…
And I am curious about the way it seems to write to the disks…
It looks like some of the disks are not doing as many IOPS as the others…
(I’m looking with the zpool iostat -v command)
(sorry about the formatting)
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
aggr1       17.6T  17.9T      0  3.63K      0   247M
  raidz2    17.6T  17.9T      0  3.63K      0   247M
    0a-11       -      -      0    379      0  19.0M
    0a-12       -      -      0    134      0  19.0M
    0a-13       -      -      0    372      0  19.0M
    0a-14       -      -      0    139      0  19.0M
    0a-15       -      -      0    366      0  19.0M
    0a-16       -      -      0    386      0  19.0M
    0a-17       -      -      0    136      0  19.0M
    0a-18       -      -      0    379      0  19.0M
    0a-19       -      -      0    138      0  19.0M
    0a-20       -      -      0    380      0  19.0M
    0a-21       -      -      0    399      0  19.0M
    0a-22       -      -      0    373      0  19.0M
    0a-23       -      -      0    133      0  18.9M
----------  -----  -----  -----  -----  -----  -----
I have been working professionally with NetApp systems for 20 years, which is why I find this strange… On a NetApp, the controller will “tetris” the data in cache before it writes a full stripe (across all disks in a RAID group), so ideally all disks get the same number of IOPS.
The strange thing is that here it seems to be the same disks that are doing fewer IOPS than the others… it does not alternate between the disks…
So in my mind some of the disks must be doing bigger writes than the others?
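And that would roughly fit the numbers in the sample above: 19.0 MB/s divided by ~380 writes/s is about 50 KB per write, while 19.0 MB/s divided by ~135 writes/s is about 140 KB per write, so the “slow” disks would be doing writes almost three times the size.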
Now it’s hard to show this over time in a post, but the above sample is from the command “zpool iostat -v 3”, so the sample covers 3 seconds.
It does not matter how long or short I make the sample interval, the result is the same, since the values are shown as per-second rates.
But the values do not even out over time…
The MB/s is the same on every disk… which just makes it even stranger?
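To check this over a longer run, something like the rough Python sketch below should print the average write size per disk for a single iostat interval (assuming your zpool binary supports the -H and -p flags, as OpenZFS does; the “0a-” prefix is just how my disks happen to be named):

#!/usr/bin/env python3
# Rough sketch, not a ZFS tool: print the average write size per leaf disk
# for one "zpool iostat" interval. Assumes the zpool binary supports -H
# (tab-separated, no headers) and -p (raw byte/ops counts).
import subprocess

POOL = "aggr1"
INTERVAL = "3"  # seconds, same as the sample above

lines = subprocess.run(
    ["zpool", "iostat", "-Hpv", POOL, INTERVAL, "2"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# The first block is the since-boot average; keep the second (live) block.
for line in lines[len(lines) // 2:]:
    fields = line.split()
    if len(fields) < 7 or not fields[0].startswith("0a-"):
        continue  # skip the pool/raidz2 rows, keep only the leaf disks
    name, write_ops, write_bw = fields[0], float(fields[4]), float(fields[6])
    if write_ops > 0:
        print(f"{name}: ~{write_bw / write_ops / 1024:.0f} KiB per write")

If the “slow” disks consistently show writes around three times the size, that would at least confirm what the ops/bandwidth numbers above suggest.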
So, can someone please explain this?
/Heino