Here’s one
Benchmarking drive speeds in a more intelligent way. Right now people copy-paste commands they find on Google, completely unaware of the caveats, gotchas, and workload considerations that you only come across after really burying yourself in the topic.
Basically, organize a set of consumer/prosumer-oriented latency and throughput tests that can more realistically compare performance across ZFS, btrfs, ext4, XFS, VM storage, etc., while keeping in mind the pitfalls each can throw at you: the different caching layers used by Linux, KVM/QEMU, or ZFS's ARC can quietly turn your read test into a test of RAM rather than your drive if you aren't careful, and the way sync writes behave in certain situations can greatly complicate whether your test is meaningful at all.
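To make the caching pitfall concrete, here's a minimal Python sketch (Linux-only, and assuming a filesystem that honors posix_fadvise; ZFS's ARC famously sits outside the page cache, so this hint won't evict anything there) that times a "cold" read after asking the kernel to drop the file from the page cache, versus a warm cached read:

```python
import os
import time

def timed_read(path, chunk=1 << 20, drop_cache=False):
    """Read a file sequentially and time it. If drop_cache is set, ask the
    kernel to evict the file's pages first, so the read actually hits the
    drive instead of RAM (Linux posix_fadvise; len=0 means whole file)."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if drop_cache:
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
        total = 0
        start = time.perf_counter()
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            total += len(buf)
        return total, time.perf_counter() - start
    finally:
        os.close(fd)

if __name__ == "__main__":
    import tempfile
    # Throwaway test file; a real benchmark would use a file much larger
    # than RAM, sitting on the filesystem you actually want to measure.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(8 << 20))
        f.flush()
        os.fsync(f.fileno())
        path = f.name
    cold = timed_read(path, drop_cache=True)
    warm = timed_read(path)
    print(f"cold: {cold[0]} bytes in {cold[1]:.4f}s")
    print(f"warm: {warm[0]} bytes in {warm[1]:.4f}s")
    os.unlink(path)
```

On a big enough file the warm pass will often report throughput no drive could deliver, which is exactly how naive copy-pasted benchmarks go wrong.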
I suppose what also goes along with this is a tutorial on how to figure out what kind of workload you are actually running, since the right test depends on it.
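As a sketch of what "figuring out your workload" can look like: /proc/diskstats exposes per-device operation counts and sectors moved (512-byte units), and average bytes per request is a quick tell for small-random versus large-sequential I/O. The field offsets below follow the documented /proc/diskstats layout; the describe helper is just an illustration:

```python
import os

def parse_diskstats_line(line):
    """Pull the workload-relevant counters out of one /proc/diskstats line:
    completed ops and sectors moved (sectors are always 512 bytes here)."""
    f = line.split()
    return {
        "device": f[2],
        "reads": int(f[3]),
        "read_sectors": int(f[5]),
        "writes": int(f[7]),
        "write_sectors": int(f[9]),
    }

def describe(stats):
    """Average request size hints at random vs. sequential behavior."""
    parts = []
    for op in ("read", "write"):
        n = stats[op + "s"]
        if n:
            avg = stats[op + "_sectors"] * 512 / n
            parts.append(f"{op}: {n} ops, avg {avg / 1024:.1f} KiB/op")
    return "; ".join(parts) or "no I/O yet"

if __name__ == "__main__" and os.path.exists("/proc/diskstats"):
    with open("/proc/diskstats") as fh:
        for line in fh:
            s = parse_diskstats_line(line)
            if s["reads"] or s["writes"]:
                print(f"{s['device']}: {describe(s)}")
```

These are counters since boot, so for a live workload you'd sample twice and diff, but even the cumulative averages separate a database-style 4-16 KiB pattern from bulk sequential transfers at a glance.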
A secondary thought is a tutorial on how to dig into potential system performance issues in general, aimed at someone completely unfamiliar with such things: generating flame graphs, zooming in on source code to figure out where an issue lives, and so on. A very inspiring example is the story where ZFS had no pools and wasn't being used at all, yet was eating over 30% CPU: ZFS Is Mysteriously Eating My CPU.
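Before reaching for flame graphs, the first question is usually just "who is burning the CPU?". A minimal, Linux-only sketch of that step: sample utime+stime from /proc/<pid>/stat twice and rank the deltas (field offsets per the documented stat format; kernel threads, the culprit in that ZFS story, show up here too):

```python
import os
import time

def cpu_ticks(pid):
    """Return utime+stime (in clock ticks) for a pid from /proc/<pid>/stat,
    or None if the process vanished between listing and reading."""
    try:
        with open(f"/proc/{pid}/stat") as fh:
            # comm may contain spaces but is parenthesized, so split on ')'.
            rest = fh.read().rsplit(")", 1)[1].split()
        # rest[0] is the state field (field 3); utime/stime are fields 14/15.
        return int(rest[11]) + int(rest[12])
    except (FileNotFoundError, ProcessLookupError):
        return None

def top_cpu(interval=1.0, n=5):
    """Sample every pid twice and return the n largest CPU-tick deltas
    as (delta, pid) pairs - a crude 'top' built from /proc."""
    pids = [p for p in os.listdir("/proc") if p.isdigit()]
    before = {p: cpu_ticks(p) for p in pids}
    time.sleep(interval)
    deltas = []
    for p, t0 in before.items():
        t1 = cpu_ticks(p)
        if t0 is not None and t1 is not None:
            deltas.append((t1 - t0, p))
    return sorted(deltas, reverse=True)[:n]

if __name__ == "__main__":
    for delta, pid in top_cpu(1.0):
        print(f"pid {pid}: {delta} ticks")
```

Once you know which pid it is, perf record -g on it plus Brendan Gregg's FlameGraph scripts turn the samples into something you can actually zoom in on, which is exactly how stories like the ZFS one get cracked.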