I recently started fooling around with an old server in my lab. I’m looking to see if my performance numbers are out of whack, because they seem low to me. I’ve built a poor-man’s all-flash array using the following:
A Supermicro 216 Chassis
A Supermicro x8dth-6 motherboard (SAS2008-IT onboard)
A ZVOL mounted via iSCSI on Windows over a 10-gigabit network:
As a point of reference I compared these results to my production server. The production server is faster than the all-flash server. This is NOT apples-to-apples, but I’m rather confused.
Dell R720
H310 IT Mode
LSI 9205-8i IT Mode
2x Xeon E5-2620s
192GB RAM
Fusion Pool with 12x4TB WD RED (CMR) drives and 2x Samsung SM953 480GB NVME SSDs
I’m going to try to redo the testing with some 9205 IT-mode cards, but since my system is PCIe 2.0 I’m not sure it will have an appreciable benefit; the SAS2308 is generally better than the SAS2008 in any case. Still, I was expecting closer to 3500 MB/s, at least in line with modern M.2 performance. With 3 separate 2008 controllers and 24 drives with no SAS expander, I’m not sure why that’s not possible.
I can also try swapping the drives for some old 180GB Intel SSD 520s I have, to see if that changes anything. The CT120BX500SSD1 SSDs I have are DRAM-less, and any efficiency gain from more modern NAND versus the old 520s is probably lost to that. Worth experimenting; I wish I had a third set of drives to compare against, but that’s all I have to play with.
I’ve also bid on an X9DRi-LN4F so I can get some faster Ivy Bridge Xeons (E5-2667 v2?) and see if that helps after I do the above testing…
Any other thoughts are appreciated. I know this is all older hardware, but I’m not sure what the biggest bottleneck is. This should be significantly faster than 12 spinning disks in a RAIDZ2, by the simple fact that these are SSDs in a pool of mirrors...?
Slower than with the Crucial drives; the 9205s are faster than the 9211s.
Won the auction for the LGA 2011 board. I’ll test with the E5-2620s it came with (the same CPUs that are in my production server).
The RAM is the exact same DIMMs as in my production server. The experiment is on hold until the board arrives. If it’s not significantly faster with the new board, I’m sort of at a loss.
To be sure, the circumstances were different, but as I understood it the root cause had to do with getting so many I/O interrupts so fast that the Linux kernel tripped over its own feet. Wendell says the Linux kernel has since been fixed, but something similar might be affecting your BSD system.
I wish I had some SAS3 cards and a SAS3 backplane to see if performance scales just by moving to SAS3 hardware, but they are still pretty pricey.
I think I will pick up some faster single-threaded CPUs next and see what benefit that gives me. The 2620s don’t exactly scream, and the X5680s I had in the other board are probably faster. It’s kind of silly, because over iSCSI it’s still only a little faster than a single SATA SSD...
To be clear, it does seem to be scaling linearly, so maybe this is as fast as these drives go?
This is a single drive:
With 11 VDEVs of 2-drive mirrors I am getting 10x the performance
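As a sanity check on that scaling, a naive striping model (with a made-up per-drive number, not one of my fio results) puts the ceiling at roughly vdev count times single-drive speed, which lines up with the ~10x I’m seeing:

```python
# Naive striping model: aggregate sequential throughput scales
# with vdev count. single_drive_mbps is a hypothetical placeholder
# for a budget SATA SSD, not a measured figure.
single_drive_mbps = 300
n_vdevs = 11
ideal_mbps = single_drive_mbps * n_vdevs  # best-case ceiling
print(ideal_mbps)
```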
I’ve also tested with all 22 drives in a single RAIDZ1
23k IOPS and 2800MB/s
Performance is pretty close to mirrors, which goes against the conventional wisdom I’ve seen here. It was always my understanding that mirrors had a high cost in storage efficiency but were always faster than RAIDZ. Then again, we are talking about 22 drives with only a single drive’s worth of parity, which is not exactly production-ready.
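To put numbers on that storage-efficiency cost, here’s the quick tally in drive counts (my arithmetic, ignoring ZFS metadata and padding overhead):

```python
# Usable space, counted in drives, for 22 drives:
drives = 22
mirror_usable = drives // 2   # 2-way mirrors: half the raw capacity
raidz1_usable = drives - 1    # one 22-wide RAIDZ1: a single parity drive
print(mirror_usable, raidz1_usable)
```

So the wide Z1 nearly doubles usable space while apparently giving up very little speed here.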
Two RAIDZ1 vdevs of 11 drives each yield the best performance/storage-efficiency balance so far:
28k IOPS and 3500MB/s.
Still, I’m not sure I would put 1 drive of parity per 11 disks into production.
So next I made a 20-drive zpool with 4 five-drive vdevs in RAIDZ1.
Losing 2 drives as “hot spares” plus 4 for parity, for a total of 6 drives, is still better storage efficiency than the mirrors (which lost 11), and it’s still faster.
26K IOPS and 3200MB/s
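Counting that layout out explicitly (my tally, ignoring ZFS padding overhead):

```python
# Tally for the 22-drive layout: 4 five-wide RAIDZ1 vdevs + 2 hot spares.
total, spares = 22, 2
vdevs, width, parity = 4, 5, 1
data_drives = vdevs * (width - parity)  # drives actually holding data
overhead = spares + vdevs * parity      # drives not holding data
assert spares + vdevs * width == total  # sanity check: uses all 22 bays
print(data_drives, overhead)
```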
Finally, I made 2 vdevs of 11-drive RAIDZ2. This offers better storage efficiency than the 4-vdev RAIDZ1 arrangement.
26k IOPS and 3200MB/s still, which is on par with the mirror array and, from my testing, appears to be more consistent.
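Putting all the layouts I tried side by side in usable drive counts (again my arithmetic, not `zfs list` output, and ignoring allocation overhead):

```python
# Usable drive count per layout: vdevs * (data drives per vdev).
def usable(vdevs, width, parity):
    return vdevs * (width - parity)

mirrors_11x2 = usable(11, 2, 1)  # pool of 2-way mirrors
raidz1_2x11  = usable(2, 11, 1)
raidz1_4x5   = usable(4, 5, 1)   # plus 2 hot spares sitting idle
raidz2_2x11  = usable(2, 11, 2)
print(mirrors_11x2, raidz1_2x11, raidz1_4x5, raidz2_2x11)
```

With roughly equal performance everywhere, the 2x 11-wide RAIDZ2 gives double-parity vdevs while still beating the spared-up RAIDZ1 layout on capacity.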
So some takeaways…
I’m not going to break 30k IOPS anytime soon
SAS3 cards may be an answer, but I honestly don’t think they will be
I have 5 PCIe x16 slots I can put NVMe drives in, and I’m fairly certain this board can do PCIe bifurcation, so I may be able to run 10 NVMe drives on this platform to see what that does.
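A hypothetical lane budget for that idea (assuming PCIe 3.0 x4 per NVMe drive and roughly 985 MB/s usable per 3.0 lane after 128b/130b encoding; these are my assumptions, not board specs):

```python
# Lane budget sketch for 10 NVMe drives via bifurcation.
drives, lanes_per_drive = 10, 4
total_lanes = drives * lanes_per_drive  # lanes needed across the slots
ceiling_mbps = total_lanes * 985        # theoretical aggregate throughput
print(total_lanes, ceiling_mbps)
```

That’s 40 lanes, which a dual-socket LGA 2011 platform should have to spare, and a theoretical ceiling far past anything the SATA pool can do.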
If anyone has some 256GB NVMe drives from old laptops or something and wants to donate to the cause for science, I’m here xD
Conventional wisdom also wouldn’t put 22 drives in a Z1. How is the resilver time with that config? I wouldn’t like to do this on HDDs, but SSDs should still be fine, although resilvers will be vastly slower than with mirrors. I’ve seen several benchmarks with HDDs and SSDs saying that RAIDZ1 performance is pretty much mirror speed as long as you can throw dozens of drives into the pool.
And yeah, I love those walls of fio output. I just don’t have the drive count to do extensive testing of differing configs myself.
In my testing with 22 drives (obviously more drives are needed to test this properly), 4 smaller vdevs of 5 drives each in RAIDZ1 were not faster or more storage-efficient than two 11-drive vdevs. I think it’s safe to go plus or minus 20% from 10 drives per vdev (between 8 and 12), depending on the specific situation.
Join the Church! It’s awesome. Before I built my new server, I was playing around with a fistful of old USB flash drives, building (and destroying) my first pool.