Toward 20 Million I/Ops - Part 1: Threadripper Pro - Work in Progress DRAFT

Yep, follow-up inbound


Reviving this thread a little. I want to PoC how far consumer SSDs can be pushed with SPDK/hybrid polling before switching to DC or Optane drives. I also want to baseline where everything currently stands so I don't repeat work that has already been done.
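
For the hybrid-polling side, here's a minimal read-only sketch of the knobs I'd check first on the 5.x kernels discussed below. The device name (`nvme0n1`) and the `nvme.poll_queues=4` suggestion are assumptions for my own test box, not anything from the earlier posts:

```python
#!/usr/bin/env python3
# Minimal read-only sketch: dump the kernel knobs that matter for polled /
# hybrid-polled I/O on a 5.x kernel. The device name (nvme0n1) is an
# assumption -- swap in whatever drive is under test.
from pathlib import Path

def show(path: Path) -> None:
    try:
        print(f"{path}: {path.read_text().strip()}")
    except OSError as err:
        print(f"{path}: <unreadable: {err}>")

# Polled I/O (fio --hipri, RWF_HIPRI) needs the nvme driver loaded with
# dedicated poll queues, e.g. nvme.poll_queues=4 on the kernel command line.
show(Path("/sys/module/nvme/parameters/poll_queues"))

# Per-device polling mode: io_poll says whether polling is enabled at all;
# io_poll_delay selects -1 = classic busy poll, 0 = adaptive hybrid poll,
# >0 = sleep that many microseconds before polling.
show(Path("/sys/block/nvme0n1/queue/io_poll"))
show(Path("/sys/block/nvme0n1/queue/io_poll_delay"))
```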

@wendell & @Chuntzu can you correct me if something is wrong.

  1. A fully loaded Threadripper/Milan system using M.2 carrier cards and Samsung 980 Pros tops out at ~1.1M IOPS in aggregate on kernels < 5.15; adding more 980s doesn’t help.
  2. Kernel 5.15+ with an R9 5950X + 2x Optane yields ~8.9M to ~10M IOPS/core, but the aggregate upper bound for a whole system is unknown.
  3. Kernel 5.17-rc with a 12900K + 2x Optane yields ~13.1M IOPS/core (using Jens’ latest patches).
  4. Using SPDK on an E5, the upper limit of a bdev is ~25M IOPS with 4k writes.

So I guess the question is: is it worth grabbing some 980 Pro drives and an M.2 carrier card, pairing them with a 12th-gen CPU, baselining SPDK/io_uring vs libaio (a sketch of the kind of comparison I mean is below), and then scaling up the number of drives to try to hit the upper limit? Or is the 980 Pro physically limited to the point where there will be no measurable difference between io_uring and libaio? (I suspect any such limit would be at the controller level rather than the NAND level.)
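
To be concrete, here's a minimal sketch of the io_uring vs libaio baseline I have in mind, assuming fio is installed, the drive under test is `/dev/nvme0n1`, and poll queues are set up as in the earlier sketch. The device path, queue depth, job count, and runtime are all placeholders:

```python
#!/usr/bin/env python3
# Hedged sketch of the libaio vs io_uring baseline: the same 4k randread job
# run once per engine. /dev/nvme0n1, iodepth, numjobs, and runtime are all
# placeholders; randread against the raw device is non-destructive, but
# double-check the device path before running.
import json
import subprocess

DEVICE = "/dev/nvme0n1"  # assumption: the 980 Pro under test

def run_fio(engine: str, extra: list) -> float:
    cmd = [
        "fio", f"--name=baseline-{engine}",
        f"--filename={DEVICE}", "--direct=1",
        "--rw=randread", "--bs=4k",
        "--iodepth=128", "--numjobs=4",
        "--time_based", "--runtime=30",
        f"--ioengine={engine}",
        "--group_reporting", "--output-format=json",
    ] + extra
    result = subprocess.run(cmd, check=True, capture_output=True)
    data = json.loads(result.stdout)
    return data["jobs"][0]["read"]["iops"]

aio_iops = run_fio("libaio", [])
# --hipri asks io_uring for polled completions (needs nvme poll queues);
# --fixedbufs / --registerfiles shave per-I/O overhead on top of that.
uring_iops = run_fio("io_uring", ["--hipri", "--fixedbufs", "--registerfiles"])

print(f"libaio:   {aio_iops:,.0f} IOPS")
print(f"io_uring: {uring_iops:,.0f} IOPS ({uring_iops / aio_iops:.2f}x)")
```

SPDK would need its own run via its fio plugin (or bdevperf) rather than a kernel block device, so I've left it out of the sketch.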

The eventual question I want to get to is a $/IOPS number, and how that number compares between consumer SLC/TLC, DC SLC/TLC, and Optane. I have a sneaking suspicion that Optane actually has the best $/IOPS, followed by consumer drives.
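
The arithmetic itself is trivial; here's a sketch with made-up placeholder prices and IOPS figures (not measurements), just to show the shape of the comparison:

```python
# $/IOPS sketch. Every number below is a made-up placeholder, not a
# measurement; the real inputs would be street prices plus per-drive IOPS
# from the fio baseline above.
drives = {
    # name: (price_usd, measured_4k_random_iops)
    "consumer TLC (placeholder)": (180.0, 1_000_000),
    "DC TLC (placeholder)":       (400.0, 1_200_000),
    "Optane (placeholder)":       (600.0, 1_500_000),
}

for name, (price_usd, iops) in drives.items():
    usd_per_million_iops = price_usd / (iops / 1_000_000)
    print(f"{name:28s} ${usd_per_million_iops:7.2f} per 1M IOPS")
```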


Almost a necro, but turns out this is actually a wiki, so here we go.

@wendell Looks like Linux kernel 6.0 is gonna have a load of performance improvements, including to io_uring

122M IOPS in 2U, with > 80% of the system idle. Easy.

Also lmao

Jens Axboe: You know, sometimes a new OS from scratch sounds appealing. But then I remember how much work that would be…


RIP Optane

F