And the thing is, I always thought of the I/O scheduler as something the OS just has built in, already there. I had no idea you could choose your scheduler.
(Just for context, I'm not an expert on anything related to Linux or programming.)
So my question is whether you've ever faced choices / problems / bottlenecks relating specifically to the I/O scheduler. I know a lot of you have worked in enterprise environments where those kinds of things can make a huge difference.
Just curious to hear your thoughts and experiences.
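For anyone else who didn't know this was configurable: on Linux the scheduler is exposed per block device under sysfs, and the one in brackets is the active one. A minimal sketch (the device name `sda` is just an example; the last bit shows how to parse the active scheduler out of a sample line):

```shell
# List available schedulers for a device; the active one is in brackets,
# e.g. "mq-deadline kyber [bfq] none"
# cat /sys/block/sda/queue/scheduler

# Switch the active scheduler (needs root):
# echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler

# Extracting the active (bracketed) scheduler from that line:
line="mq-deadline kyber [bfq] none"   # sample sysfs output
active=$(printf '%s\n' "$line" | sed 's/.*\[\([^]]*\)\].*/\1/')
echo "$active"                         # -> bfq
```

The change via `tee` only lasts until reboot; making it permanent usually goes through a udev rule or kernel command line, depending on the distro.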
I haven’t had the patience to sit through the video. I guess deadline, bfq, or noop gets you faster random I/O on high-IOPS NVMe than CFQ? Did Ubuntu switch from CFQ to something better for NVMe?
I used to mess with schedulers back in the day when I cared about spinning rust access latency. I just don’t anymore. At “the enterprise” where I work, all storage is cloud-based, and there are schedulers there too, but they aren’t the Linux I/O scheduler. … Different product teams are buying IOPS and/or spindle time in addition to bytes, and there’s a central system to ensure fair usage in each cluster.
Also, if you really start to dig in, e.g. into the difference between a 20TB IronWolf Pro and a 20TB Exos X20, it turns out the two ship different firmware, with the scheduler inside the drive tuned for different use cases. In industry benchmarks, even though the drives are physically the same, they end up behaving differently on different workloads.
Scrolling through the video, I feel the author should take the OS out as a variable. I think he’s picking up the performance differences of the three file systems with the scheduler as an additional confounding variable.
I have done similar tests and I am not surprised, but more people should realize that 3x throughput improvements are possible by carefully choosing your storage setup.
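One way to isolate the scheduler as a variable is to rerun an identical benchmark job under each scheduler and compare only that axis. A hedged sketch of an fio job file for this (the directory, size, and runtime are placeholders, not from the video):

```ini
; randread.fio - rerun unchanged after each scheduler switch
[global]
ioengine=libaio
direct=1          ; bypass the page cache so the block layer is actually hit
runtime=30
time_based

[randread-4k]
rw=randread
bs=4k
iodepth=32
numjobs=4
size=1G
directory=/mnt/testfs   ; placeholder mount point on the filesystem under test
```

Run with `fio randread.fio` after setting each scheduler, and compare the reported IOPS/latency across runs rather than across file systems.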