Hi all, I have a Dell PowerEdge R7615 with the following spec:
AMD EPYC 9354P
256GB DDR5
Dual 512GB SSDs in RAID1
2x Kioxia CM6-R (Dell-branded)
Intel E810 100G NIC
Win Server 2025 Datacenter
We bought this server for virtualisation of workstations, with the intention of expanding storage down the line. What we have found instead is that performance from the Kioxia drives has been very hit-and-miss.
I have been using CrystalDiskMark (CDM from here on) to benchmark the disks/arrays throughout. If this tool isn't reliable for this kind of testing, I'm open to suggestions.
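Worth noting that CDM is a GUI front end over Microsoft's DiskSpd, so if anyone wants to reproduce a run without the GUI, something along these lines should approximate CDM's RND4K Q32T16 profile (assumptions: diskspd.exe is in the working directory and T: is the volume under test; both are placeholders):

```powershell
# Roughly CDM's RND4K Q32T16 read test, done with DiskSpd directly.
# -b4K  = 4 KiB blocks       -o32  = 32 outstanding IOs per thread
# -t16  = 16 threads         -r    = random access
# -w0   = 100% reads         -Sh   = bypass software/hardware caching
# -L    = latency stats      -c16G = create a 16 GiB test file
.\diskspd.exe -b4K -d60 -o32 -t16 -r -w0 -Sh -L -c16G T:\diskspd-test.dat
```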
We originally were suffering extremely poor response times inside a VM. The two Kioxias were in a Storage Spaces RAID0 (simple space), with everything left at defaults during setup. We were using a fixed-size VHDX with a Gen2 VM. Running CDM inside the VM gave these results (read / write):
Seq1M Q8T1: 3,556 MB/s / 4,139 MB/s
Seq128K Q32T1: 2,166 MB/s / 2,108 MB/s
RND4K Q32T16: 391 MB/s / 203 MB/s
RND4K Q1T1: 21 MB/s / 22 MB/s
I'd have thought reads and writes would be a lot faster than this, since no other activity was happening on the array. I took a screenshot of the response time during a CDM run; I saw it go over 20,000 ms.
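For anyone wanting to sanity-check the layout side of this, here's roughly how I'd dump the simple space's geometry and the VHDX settings (assumes the Storage and Hyper-V PowerShell modules; the VHDX path is a placeholder):

```powershell
# Show the simple space's column count and interleave (the Storage Spaces
# defaults are what's in question here), plus the VHDX geometry.
Get-VirtualDisk |
    Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave
Get-VHD -Path 'C:\VMs\ws01.vhdx' |
    Select-Object VhdType, BlockSize, LogicalSectorSize, PhysicalSectorSize
```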
So next I decided to test with just a single disk, putting the VHDX straight onto it. The benchmark numbers looked better, but as soon as the random tests started, the response time inside the VM again fell off dramatically:
Seq1M Q8T1: 4,284 MB/s / 4,008 MB/s
Seq128K Q32T1: 6,891 MB/s / 4,016 MB/s
RND4K Q32T16: 1,132 MB/s / 331 MB/s
RND4K Q1T1: 26 MB/s / 62 MB/s
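If it helps, the response-time cliff can be quantified from inside the guest rather than eyeballed: DiskSpd's -L switch reports per-IO latency percentiles, so a QD1 run like this (drive letter is a placeholder) would show whether the 99th percentile is blowing out on random IO:

```powershell
# 4 KiB random read test at queue depth 1, run inside the guest, with
# latency percentiles. D: is a placeholder for the VHDX-backed volume.
.\diskspd.exe -b4K -d60 -o1 -t1 -r -w0 -Sh -L -c8G D:\latency-test.dat
```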
The next thing I did was benchmark the RAID0 array on the host this time. A bit more promising, it seems:
Seq1M Q8T1: 13,901 MB/s / 7,967 MB/s
Seq128K Q32T1: 12,819 MB/s / 7,976 MB/s
RND4K Q32T16: 5,901 MB/s / 4,380 MB/s
RND4K Q1T1: 45 MB/s / 301 MB/s
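One thing possibly worth ruling out for the Q1T1 numbers (45 MB/s at 4K is roughly 11k IOPS, so per-IO latency is the limiter): the Windows power plan and CPU C-states can add measurable latency to QD1 IO. Checking and switching the plan is quick; the GUID below is the built-in High performance scheme:

```powershell
# Show the active power scheme, then switch to the stock High performance plan.
powercfg /getactivescheme
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```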
Finally, I ran a single-disk benchmark on the host:
Seq1M Q8T1: 4,283 MB/s / 4,026 MB/s
Seq128K Q32T1: 6,957 MB/s / 4,030 MB/s
RND4K Q32T16: 6,174 MB/s / 3,160 MB/s
RND4K Q1T1: 46 MB/s / 312 MB/s
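Before and after the firmware work described next, it's probably worth snapshotting what Windows reports for the drives, so the before/after comparison is on record; a minimal sketch:

```powershell
# Record drive model, firmware revision and health for later comparison.
Get-PhysicalDisk |
    Select-Object FriendlyName, SerialNumber, FirmwareVersion, BusType, MediaType, HealthStatus
```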
I then proceeded to update the firmware of the controller card and then the firmware of the disks themselves. This is where it just gets weirder…
To start with, the single-disk benchmark. Random reads fell off a lot, and random writes almost halved:
Seq1M Q8T1: 6,869 MB/s / 4,028 MB/s
Seq128K Q32T1: 6,193 MB/s / 4,031 MB/s
RND4K Q32T16: 3,545 MB/s / 3,074 MB/s
RND4K Q1T1: 40 MB/s / 169 MB/s
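One thing I'm not certain applies to these drives, but which may be worth checking: some NVMe drives stage new firmware in a slot and only activate it after a power cycle. Windows can report the slots directly, assuming the drives support the query:

```powershell
# Check firmware slots and which slot is active on the Kioxias.
Get-PhysicalDisk | Where-Object FriendlyName -like '*KIOXIA*' |
    Get-StorageFirmwareInformation
```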
Next, I did RAID0 in Storage Spaces again. The random speeds are within margin of error of the single disk, and the sequential results are a bit better overall, but still nowhere near the pre-firmware-update benchmark:
Seq1M Q8T1: 8,362 MB/s / 7,964 MB/s
Seq128K Q32T1: 5,670 MB/s / 5,689 MB/s
RND4K Q32T16: 3,437 MB/s / 3,227 MB/s
RND4K Q1T1: 40 MB/s / 165 MB/s
Lastly, I did a RAID0 within the S160 controller card (which is software RAID, by the way). I don't really know what to make of these results either:
Seq1M Q8T1: 7,388 MB/s / 7,586 MB/s
Seq128K Q32T1: 6,479 MB/s / 6,538 MB/s
RND4K Q32T16: 3,453 MB/s / 2,948 MB/s
RND4K Q1T1: 40 MB/s / 167 MB/s
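One possible wrinkle with the S160 comparison: depending on whether the drives are presented to the OS directly as NVMe or from behind the software RAID stack, the IO path differs, and it's easy to check what Windows actually sees:

```powershell
# See how each disk is presented to the OS (direct NVMe vs. a RAID virtual disk).
Get-Disk | Select-Object Number, FriendlyName, Model, BusType, PartitionStyle
```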
It seems the firmware update has caused RAID performance to take a massive hit. I feel like I'm starting to go insane with how far I'm looking into this; I started out by trying different block sizes and interleave sizes to improve random performance (a sketch of what I mean is below).
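For reference on those interleave experiments, this is roughly the shape of an explicit layout, in case anyone wants to suggest specific values (pool and disk names are placeholders, and the 64 KB interleave with a matching NTFS allocation unit size is just one commonly suggested pairing, not a recommendation):

```powershell
# Create a 2-column simple (RAID0) space with an explicit interleave,
# then format NTFS with a matching allocation unit size.
# 'NVMePool' and 'FastVD' are placeholder names.
New-VirtualDisk -StoragePoolFriendlyName 'NVMePool' -FriendlyName 'FastVD' `
    -ResiliencySettingName Simple -NumberOfColumns 2 -Interleave 64KB -UseMaximumSize
Get-VirtualDisk -FriendlyName 'FastVD' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536
```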
If anyone has any pointers or suggestions, that would be greatly appreciated; we're all scratching our heads a bit with this one.