Linux IO schedulers

Hey y'all, long-time lurker, first time posting.

So I was talking with some folks about the IO scheduler used in one of the Linux gaming distros (Bazzite uses Kyber by default), and they had done some benchmarking. It piqued my interest, and since it's a distro that runs on portable gaming devices, I did a bit of digging into how the choice of IO scheduler affects power consumption. I found this paper:

Do we still need IO schedulers for low-latency disks?
Caeden Whitaker, Sidharth Sundar, Bryan Harris, Nihat Altiparmak
(sorry, can't post a link, but Google finds it easily enough)

The tl;dr is that the paper shows that on modern SSD/NVMe drives, IO schedulers introduce extra latency and higher power usage. For those drive types they recommend switching back to the "none" scheduler. I've been running this on my own gaming handheld for about 18 days with no noticeable negative impact.
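In case it helps anyone try this, here's roughly how I check and flip the scheduler through sysfs (this is just standard kernel plumbing, not anything from the paper). The device name nvme0n1 is an assumption, adjust for your drive; writing the scheduler file needs root, and the change doesn't survive a reboot.

```python
# Minimal sketch: read and set the per-device IO scheduler via sysfs.
# Device name is an assumption; needs root to write; not persistent.
from pathlib import Path

def current_scheduler(dev: str) -> str:
    # The active scheduler is shown in brackets, e.g. "[none] mq-deadline kyber"
    options = Path(f"/sys/block/{dev}/queue/scheduler").read_text().split()
    return next(o.strip("[]") for o in options if o.startswith("["))

def set_scheduler(dev: str, sched: str) -> None:
    Path(f"/sys/block/{dev}/queue/scheduler").write_text(sched)

if __name__ == "__main__":
    dev = "nvme0n1"  # assumption: your device may differ
    print("before:", current_scheduler(dev))
    set_scheduler(dev, "none")
    print("after: ", current_scheduler(dev))
```

If you want it to stick across reboots, a udev rule that writes ATTR{queue/scheduler} on device add is the usual approach.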

So my question is this: has anyone else done any sort of extensive testing on this themselves? I'd love to see this get a full video on YT. I get that for very specific IO patterns a scheduler might still help even on a Gen 5 NVMe, but for most use outside those narrow parameters and use cases, it seems like it's time to retire them, or at least change the defaults.
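For anyone who wants to do a quick sanity check of their own, this is the kind of crude micro-benchmark I've been using (my own sketch, not the paper's methodology): it times 4 KiB random reads with O_DIRECT so the page cache doesn't hide the drive. Run it once per scheduler and compare. The device path is an assumption; it only reads, but point it at the right drive and run it as root.

```python
# Rough latency micro-benchmark sketch: times 4 KiB random reads against a
# block device with O_DIRECT to bypass the page cache. Linux only; run as root.
import mmap
import os
import random
import statistics
import time

DEV = "/dev/nvme0n1"   # assumption: change to your device
BLOCK = 4096           # must be a multiple of the drive's logical block size
SAMPLES = 2000

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
dev_size = os.lseek(fd, 0, os.SEEK_END)

# O_DIRECT needs a memory-aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)

latencies = []
for _ in range(SAMPLES):
    # File offsets must also be block-aligned for O_DIRECT.
    offset = random.randrange(0, dev_size // BLOCK) * BLOCK
    t0 = time.perf_counter_ns()
    os.preadv(fd, [buf], offset)
    latencies.append(time.perf_counter_ns() - t0)

os.close(fd)
latencies.sort()
print(f"median: {latencies[len(latencies) // 2] / 1000:.1f} us")
print(f"p99:    {latencies[int(len(latencies) * 0.99)] / 1000:.1f} us")
print(f"mean:   {statistics.mean(latencies) / 1000:.1f} us")
```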

Any thoughts?

To me the scheduler is about improving the efficiency of IO, which doesn't necessarily say anything about power consumption, latency, or throughput. In practice it mostly means that requests to the same or nearby sectors get merged and queued together, which mattered when a seek on spinning rust cost milliseconds but is largely a vestigial concern now. From there it should be quite obvious that any scheduling introduces some overhead and therefore increases latency and power consumption.
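If you're curious how much merging actually happens on your workload, the kernel already counts it. Here's a quick sketch (my own, reading the documented /proc/diskstats fields) that reports merged reads and writes as a share of total requests; the device name is an assumption.

```python
# Quick look at how much request merging the block layer is actually doing,
# using the merge counters the kernel exposes in /proc/diskstats
# (fields after the device name: reads completed, reads merged, sectors read,
# ms reading, writes completed, writes merged, ...).
def merge_stats(dev: str = "nvme0n1") -> None:  # device name is an assumption
    with open("/proc/diskstats") as f:
        for line in f:
            parts = line.split()
            if parts[2] == dev:
                reads, reads_merged = int(parts[3]), int(parts[4])
                writes, writes_merged = int(parts[7]), int(parts[8])
                print(f"{dev}: {reads_merged} of {reads + reads_merged} "
                      f"read requests merged, {writes_merged} of "
                      f"{writes + writes_merged} write requests merged")
                return
    raise ValueError(f"device {dev} not found")

merge_stats()
```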

Is this the research paper you're talking about?

Yes, there's a PDF available on the HotStorage site that may be easier to access. They have a couple there, actually: one looks more like a slide deck, and the other has the filename hotstorage23-final1.pdf and has much more data.

Another consideration that I think may merit study is the effect on 3D NAND. Would letting IO happen more organically reduce sequential IO to a given physical location in the NAND enough to keep heat lower and stretch its life a bit more?

Yeah, this is no holy grail of efficiency, but I think it should be on folks' radar as we move forward with faster storage that isn't bound by the physical media limitations that spawned schedulers in the first place.

Not sure whether this is about maximizing power efficiency or minimizing power usage, but there's also the latency angle, which I've been aiming to minimize, and there are some interesting tweaks others have made for ultra-low-latency storage.
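On the latency side, the tweaks I've seen mostly amount to turning off per-request work in the block layer. Here's a sketch of the sysfs knobs involved; the files are standard block-layer tunables, but the device name and the specific values are my assumptions, not recommendations, and none of this persists across reboots.

```python
# Sketch of block-layer sysfs knobs people tune for low-latency NVMe.
# Values are illustrative assumptions; needs root; not persistent.
from pathlib import Path

DEV = "nvme0n1"  # assumption: adjust for your drive
QUEUE = Path(f"/sys/block/{DEV}/queue")

tweaks = {
    "scheduler": "none",  # skip IO scheduling entirely
    "nomerges": "2",      # don't spend cycles trying to merge requests
    "add_random": "0",    # don't feed disk IO timings into the entropy pool
}

for knob, value in tweaks.items():
    (QUEUE / knob).write_text(value)
    print(f"{knob} = {(QUEUE / knob).read_text().strip()}")
```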