So I'm experiencing some super weird issues with my pool of 6 SSDs in RAID 0. I can't get write speeds above ~560MB/s or read speeds above ~600MB/s over SMB. I've done a ton of reading and testing but I can't figure out what's going wrong.
Some background context.
The drives are Seagate 1200 SAS3 SSDs.
They are connected to a Dell MD1220 JBOD enclosure. The enclosure is SAS2 hardware with two disk controllers, giving 8 lanes total (4 per controller).
The enclosure is connected to an R720 server via an LSI 9207-8e HBA in IT mode (latest P20 firmware).
FreeNAS runs inside a VM on ESXi; the HBA is passed through to the VM, and it sees all the disks.
Putting a single drive in a stripe vdev by itself yields around 570MB/s in CrystalDiskMark. Adding more drives to the stripe (effectively RAID 0) does not improve the speeds at all.
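One thing worth noting: a quick back-of-envelope check (assuming the standard SAS2 signalling rate of 6 Gb/s with 8b/10b encoding) puts the usable bandwidth of a single SAS2 link almost exactly at the ceiling I'm seeing:

```python
# Back-of-envelope: usable bandwidth of one SAS2 lane.
# Assumptions: 6 Gb/s line rate, 8b/10b encoding (10 bits on the wire per 8 data bits).
line_rate_bps = 6e9
encoding_efficiency = 8 / 10
usable_mb_per_s = line_rate_bps * encoding_efficiency / 8 / 1e6
print(f"Single SAS2 lane: ~{usable_mb_per_s:.0f} MB/s usable")  # ~600 MB/s
```

So a ~560–600MB/s cap that doesn't change when drives are added is at least consistent with all traffic funnelling through a single 6 Gb/s path somewhere (one drive's link speed, or one lane of the wide port), though I can't confirm that's what's happening.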
I've tested my 10gig network with iperf and it has enough bandwidth to not be the bottleneck.
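For reference, the rough math backs this up (assuming a 10 Gb/s link; real-world throughput after TCP/SMB overhead is typically a bit over 1 GB/s):

```python
# 10GbE line rate vs. the observed SMB write speed (rough numbers).
raw_mb_per_s = 10e9 / 8 / 1e6     # 1250 MB/s at line rate, before protocol overhead
observed_write_mb_per_s = 560     # the write ceiling I'm hitting over SMB
print(f"10GbE line rate: {raw_mb_per_s:.0f} MB/s")
print(f"Headroom above observed writes: {raw_mb_per_s - observed_write_mb_per_s:.0f} MB/s")
```

Even being pessimistic about overhead, the network has several hundred MB/s of headroom above what I'm seeing, so I don't think it's the cap.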
I'm unsure how to proceed. Best case, virtualising FreeNAS is hurting ZFS performance; worst case, the SSDs aren't compatible with the fairly old JBOD enclosure.