Linus (and by extension, Wendell) wasn’t running any specific type of benchmark that I am aware of.
It was when they deployed the server for production use that they ran into this kind of issue/error.
Do the transfer rates scale with the number of NVMe 4.0 x4 SSDs that are in your RAID0 array?
I tried googling which version of the Samsung EVO NVMe 4.0 x4 SSDs you are using, but the only results that come up are, for example, the Samsung 980 Pro series (which is NVMe 4.0 x4).
To that end though, assuming that you’re able to hit the peak of 5 GB/s STR write speeds all the time, that works out to be 40 Gbps.
Therefore, if there is linear scalability in the write bandwidth, adding a second Samsung 980 Pro 1 TB (I picked a capacity at random so that I could look up its rated write speed) would, in theory, get you to 10 GB/s, and three of those NVMe 4.0 x4 drives in RAID0 would be 15 GB/s, or 120 Gbps, which would exceed the 100 Gbps of 4x EDR IB.
I would be surprised if you get that kind of linear scalability in practice (or if you are using “your real world data” to benchmark the RAID0 array).
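Just to show my work on that back-of-the-envelope math, here is a quick sketch (the 5 GB/s per-drive figure is the spec-sheet peak sequential write, and the perfectly linear RAID0 scaling is the assumption I'm questioning):

```python
# Back-of-the-envelope RAID0 write-bandwidth scaling.
# ASSUMPTION: perfectly linear scaling, which rarely holds in practice.

GBPS_PER_GB_S = 8       # 1 GB/s = 8 Gbps
PER_DRIVE_GB_S = 5.0    # Samsung 980 Pro peak sequential write (spec sheet)
EDR_IB_GBPS = 100       # 4x EDR InfiniBand link speed

for n_drives in (1, 2, 3):
    total_gb_s = n_drives * PER_DRIVE_GB_S
    total_gbps = total_gb_s * GBPS_PER_GB_S
    exceeds = "yes" if total_gbps > EDR_IB_GBPS else "no"
    print(f"{n_drives} drive(s): {total_gb_s:.0f} GB/s = "
          f"{total_gbps:.0f} Gbps, exceeds 100 Gbps IB: {exceeds}")
```

So, on paper, it takes three of those drives before the array out-runs the 4x EDR IB link.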
I know that for my Samsung 860 EVO 1 TB SATA 6 Gbps SSDs, according to the spec sheet (Source: https://www.samsung.com/us/computing/memory-storage/solid-state-drives/ssd-860-evo-2-5--sata-iii-1tb-mz-76e1t0b-am/#specs), it says that it should be capable of 520 MB/s writes (4.16 Gbps).
I know that with four of those in RAID0 (as in my micro HPC cluster headnode), the best that it has been able to muster in “real world usage” (i.e. not a benchmark like Crystal Disk Mark) is anywhere between 800 MB/s and about 1200 MB/s or so (vs. the 2080 MB/s that it should have been capable of).
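To put numbers on how far short of linear scaling that falls, a quick efficiency check based on my own figures above:

```python
# Real-world vs. theoretical RAID0 write throughput for my 4x 860 EVO array.
THEORETICAL_MB_S = 4 * 520  # four drives at the spec-sheet 520 MB/s each

for measured_mb_s in (800, 1200):
    efficiency = measured_mb_s / THEORETICAL_MB_S * 100
    print(f"{measured_mb_s} MB/s measured = {efficiency:.0f}% "
          f"of the {THEORETICAL_MB_S} MB/s theoretical maximum")
```

In other words, I'm seeing somewhere around 38–58% of the theoretical linear-scaling number, which is why I'm skeptical of assuming linear scalability for your NVMe array.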
(Sidebar: I did find the Samsung 860 EVO SSD, but that’s apparently an M.2 SATA SSD, not an NVMe SSD. (Source: SSD 860 EVO M.2 SATA 1TB Memory & Storage - MZ-N6E1T0BW | Samsung US))
If you’re able to achieve the scalability, that would be awesome.
I’ve yet to accomplish that. (And my four SATA 6 Gbps SSDs are attached to an LSI/Broadcom/Avago MegaRAID 9341-8i SAS 12 Gbps HW RAID HBA, so the RAID HBA shouldn’t be the bottleneck here.)
(Sidebar #2: I’ve moved away from burning through SSDs like that because, whilst it is super fast, I have found that the faster the SSD is, the faster I wear out its write endurance limit: I’d want to use it all the time, and as a result I would very quickly consume the finite number of program/erase cycles of the NAND flash chips/modules on said SSDs.)