NVMe IOPS - Two individual drives better than 2 drives in RAID0?

I’m going to be running a number of large, container-based workloads that will require fast NVMe storage. When looking into IOPS, all the posts online are basically asking “Are two RAID0 drives better than a single drive?”.

I want to ask: in terms of raw IOPS performance, are two individual drives with the workloads divided between them better than a two-drive RAID0 array carrying all the workloads? I tried to ask this on Reddit, and the answers were all over the place.

To make this easy, I’ve added a picture and I’m wanting to know, “Which scenario is NOT TRUE?”

I’m pretty sure #2 is false, but I can’t find any resources talking about total system IOPS, just individual drive IOPS. Any nuggets of wisdom here are most welcome :slight_smile:

P.S. This isn’t important data and doesn’t require redundancy. Speed is all I’m interested in, hence the RAID0.

Welcome to the forum!

In most scenarios, 2 drives in RAID0 are better than 1 drive. However, there are cases where the storage is so fast that the software layer talking to it can’t keep up, creating a software bottleneck. This is what happened to chocamo ITT:

He is using Kioxia SSDs in RAID10 and getting lower IOPS than a single drive, no matter whether the array has 2 drives or 8. Wendell and Linus also ran into the same issue when trying to build monster SSD arrays out of fast NVMe drives.

So keep the above in mind - you could face a similar situation.

It depends. Sometimes you may get double the IOPS, sometimes more but not double, and in cases like the one mentioned above, less than a single drive. I would argue that if you have 2 separate applications that are IOPS-hungry, put them on different drives so that one doesn’t bother the other. If performance is a concern, it is generally better to have dedicated resources than shared ones. With that split you get more consistent performance, as opposed to higher peak performance, and consistency is usually more important than raw performance.
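If you want to see where your own setup lands before committing, fio is the usual tool for this kind of comparison. Here’s a sketch of a job file for a 4k random-read IOPS test - the device paths are placeholders for your drives, and the queue depth / job count are just starting points to tune:

```ini
; fio job file: 4k random-read IOPS test.
; /dev/nvme0n1 and /dev/nvme1n1 are placeholders -- substitute your devices.
[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
iodepth=32
numjobs=4
runtime=60
time_based
group_reporting

[drive0]
filename=/dev/nvme0n1

[drive1]
filename=/dev/nvme1n1
```

Run it once per drive and once with both sections enabled, then compare against the same test on the RAID0 device. (Write tests against raw devices destroy data - only point fio at drives you don’t care about.)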


3 is false so long as:

  • your controller can keep up
  • there aren’t any other bottlenecks with the OS/file system

RAID0 has the additional benefit that your IO is pooled. So app A could do 33k IOPS, for example, if app B is idle. With one drive per app, you’re hard-capped at 22k per app.

That can also be a disadvantage, though, if you want to guarantee both apps an equal share (without some other method of guaranteeing storage QoS).
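A toy model of that trade-off (illustrative arithmetic only - the 22k per-drive figure comes from the diagram, controller overhead is ignored, and the greedy allocation stands in for “no storage QoS”):

```python
# Illustrative model: dedicated per-app drives vs. a pooled RAID0 array.
# Assumes each drive delivers a flat 22k IOPS (figure from the diagram).
PER_DRIVE_IOPS = 22_000

def dedicated(demand_a, demand_b):
    """Each app is hard-capped at its own drive's IOPS."""
    return (min(demand_a, PER_DRIVE_IOPS),
            min(demand_b, PER_DRIVE_IOPS))

def pooled(demand_a, demand_b):
    """RAID0 pools both drives; app A greedily takes what it wants first,
    which models the fairness problem when there's no QoS mechanism."""
    pool = 2 * PER_DRIVE_IOPS
    a = min(demand_a, pool)
    b = min(demand_b, pool - a)
    return a, b

# App A wants 33k while app B is idle:
print(dedicated(33_000, 0))  # (22000, 0) -- capped by its own drive
print(pooled(33_000, 0))     # (33000, 0) -- borrows the idle drive's headroom

# Both apps want 30k at once: pooling is unfair without QoS.
print(pooled(30_000, 30_000))  # (30000, 14000)
```

The last line is the QoS caveat in numbers: pooling lets one hungry app starve its sibling, while dedicated drives guarantee each app its 22k.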


I think performance in the scenarios you depicted depends on several factors.

  1. Scenario 1: each app is constrained to the IOPS of one storage device - that’s it, you’ll never get more than 22k.
  2. Scenario 2: each app gets opportunistically increased IOPS, depending on its potentially idling sibling - think load-balancing - when App1 idles, App2 gets more IOPS - each app’s opportunistic IOPS is >=22k.
  3. Scenario 3: I’m struggling with this one - why would you implement some kind of artificial rate-limiting “bottleneck” that limits both apps to less than the combined RAID0 IOPS?

I’m sure you’re aware of the roughly doubled odds of catastrophic failure in RAID0, so I’m not gonna mention it, by mentioning it anyway. LOL, circular dependencies! :vulcan_salute: :nerd_face: If you can afford it, add redundancy, e.g. RAID10/striped mirrors.
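The “doubled odds” bit is easy to sanity-check: RAID0 dies if any member drive dies, so the array failure probability is 1 minus the chance that every drive survives. A back-of-the-envelope calculation, assuming independent failures and an illustrative 3% annual per-drive failure rate (not a real drive statistic):

```python
# RAID0 fails if ANY of the n drives fails (independent failures assumed).
# p = annual failure probability of one drive; 3% is an illustrative guess.
p = 0.03

def raid0_failure_prob(n, p):
    """P(array fails) = 1 - P(all n drives survive)."""
    return 1 - (1 - p) ** n

print(round(raid0_failure_prob(1, p), 4))  # 0.03
print(round(raid0_failure_prob(2, p), 4))  # 0.0591 -- just under double
```

So “doubled” is a slight overstatement (the two-drive case is 2p minus the tiny p² overlap), but it’s the right mental model.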