This is mostly a response to @wendell’s call for DevOps workloads to help benchmark a PCIe 4 NVMe raid 0 build.
(not yet up on level1techs.com/video and the forum?)
In day-to-day software/web development in teams, I rarely run enough linters or tests in parallel to hit the limits of my system. The only time I find myself waiting while writing or editing code is when my IDE decides it's time to re-index the whole codebase. That is indeed a pain in the ass, and I'd love to hear how much less of an issue it is with fast storage.
On the DevOps side there is much more room for improvement. Compiling C and C++ code takes up most of my Docker image build time, for pretty much any project. But where things really slow down are the data science-type workloads: ingesting and transforming bulk data from disk or over the wire, and feeding it to a data-hungry application that can only run as fast as the data can be accessed.
I've been working on a microservice that could serve as a benchmark for such workloads. It's easy to run with docker-compose: it downloads a bunch of data, loads it into an embedded RocksDB, and starts serving the data over HTTP.
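For anyone wanting to try something similar, a setup like this would look roughly as follows in compose terms. This is only an illustrative sketch: the service name, image, port, and paths are all made up, not the actual project's configuration. The key point is that the RocksDB data directory should live on the storage you want to benchmark.

```yaml
# Hypothetical sketch, not the real project's compose file.
services:
  ingest-api:
    image: example/rocksdb-microservice:latest  # assumed image name
    ports:
      - "8080:8080"
    volumes:
      # Mount the embedded RocksDB's data dir on the NVMe array under test,
      # so both bulk ingestion and query latency exercise that storage.
      - ./data:/var/lib/rocksdb
    environment:
      # Assumed knob for the bulk dataset the service downloads on startup.
      - INGEST_SOURCE=https://example.org/dataset
```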
It already monitors ingestion speed, and the input data can be used to emulate realistic requests. I expect storage speed to be the bottleneck for loading on Ryzen 9 CPUs, and I'd be eager to learn how the NVMe array copes with parallel loading. On the webserver side, storage speed significantly influences latency, and this has a big impact on the task I'm actually trying to achieve with the microservice. No pre-emptive caching is employed, so as long as queries are not repeated, this should make for a fair benchmark.
Of course, even if you don’t have the “ultimate devops workstation” that Wendell has, I would still love to hear how this workload does on your system! I can help out with a script that makes unique valid requests to the webserver/db.
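In the meantime, here's a rough sketch of what such a request script could look like. The `/lookup` endpoint and the zero-padded key format are invented for illustration; the real service's API will differ, but the idea is the same: never repeat a key, so nothing can be answered from a cache, and record per-request latency.

```python
# Sketch of a unique-request benchmark client (endpoint and key format
# are assumptions, not the real service's API).
import time
import urllib.request


def unique_paths(n, prefix="/lookup"):
    """Yield n distinct request paths; no key is ever repeated."""
    for i in range(n):
        yield f"{prefix}/{i:08d}"


def run_benchmark(base_url, n=1000):
    """Issue n unique GET requests and return latency percentiles (seconds)."""
    latencies = []
    for path in unique_paths(n):
        start = time.perf_counter()
        with urllib.request.urlopen(base_url + path) as resp:
            resp.read()  # drain the body so the full response is timed
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {"p50": latencies[n // 2], "p99": latencies[int(n * 0.99)]}
```

A real version would fire requests from multiple workers to keep the queue depth up, but even this serial loop shows whether repeated-key caching is being avoided.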
I'll provide more info when prompted (and after I've slept).