How true is this? The Linux kernel has been accidentally hardcoded to a maximum of 8 cores for scheduling since 2009

The article basically points to a set of commits and says “well, it’s like that, and it means X, Y and Z”.

And I’m wondering if there’s more context to this.

It does sound pretty bad, the way it’s presented. But is it? I’m no kernel expert. Might be that this is entirely benign.

I think if it were as big a problem as all that, Linux wouldn’t be the operating system of choice for massively parallel processors, and wouldn’t consistently lead Windows in multithreaded performance scaling.

4 Likes

that is a misleading title; it should be something more like “the multitasking timeslice stops increasing beyond 8 cores”.

1 Like

There is always old stuff in the code…it might still be necessary, but it could also be obsolete or only relevant for legacy hardware. I’m not a kernel hacker, but I know my scaling across cores is pretty much linear on any system I own.

Of any major OS, Linux has proven to keep up with all core counts…even FreeBSD had some hiccups, and Windows is always troublesome in this regard and has also had licensing paywalls for higher core counts.

Which may or may not be a problem. Sounds like that’s for scheduling more processes on the same core, which kinda gets less relevant with increasing core counts.

And catchy titles make for good clickbait. So this title makes sense…although not necessarily constructive.

1 Like

Recent AMD desktop CPUs have 32 threads; recent AMD server CPUs have 128 threads per socket and often come in multi-socket configurations.

Desktop, yes; for servers, the author may want to update his data.

So the article is effectively claiming that:

The official comments in the code say it scales with log2(1+cores), but it doesn’t.
All the comments in the code are incorrect.
The official documentation and man pages are incorrect.
Every blog article, Stack Overflow answer and guide ever published about the scheduler is incorrect.

While not impossible, I find this unlikely.

I’m not sure why the author suggests that this was added by accident. 8U in this case is just a fancy(ier) way of writing 8 (the U suffix only marks the literal as unsigned). Had they dug further, they would have found this context on the Linux kernel mailing list:

https://lkml.org/lkml/2009/12/9/153

Which suggests this 8 limit was deliberate:

Based on Peter Zijlstras patch suggestion this enables recalculation of the scheduler tunables in response of a change in the number of cpus. It also adds a max of eight cpus that are considered in that scaling.

The first patch even has a bug where it uses max instead of min (which was quickly pointed out).

The original context is in this thread:

https://lkml.org/lkml/2009/11/26/271

Peter Zijlstra: […] Aside from that, we probably should put an upper limit in place, as I guess large cpu count machines get silly large values.

Christian Ehrhardt: […] I agree to that, but in the code is already an upper limit of 200.000.000 - well we might discuss if that is too low/high.

Peter Zijlstra: Yeah, I think we should cap it around the 8-16 CPUs.

So this was very intentional, and even without this 8-CPU cap, machines with very high core counts would have hit the existing upper limit of 200,000,000 anyway.
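
For the curious, here’s a minimal standalone C sketch of what that capped scaling does. This is my paraphrase of the logic around get_update_sysctl_factor() (kernel/sched/fair.c in recent trees), not the verbatim kernel source:

```c
#include <stdio.h>

/* Integer log2, matching what the kernel's ilog2() would return here. */
static unsigned int ilog2_u(unsigned int v)
{
	unsigned int log = 0;
	while (v >>= 1)
		log++;
	return log;
}

/* Sketch of the scheduler tunable scaling factor: roughly
 * factor = 1 + ilog2(min(ncpus, 8)), so it stops growing past 8 CPUs. */
static unsigned int tunable_factor(unsigned int ncpus)
{
	unsigned int cpus = ncpus < 8 ? ncpus : 8; /* the "8U" cap */
	return 1 + ilog2_u(cpus);
}

int main(void)
{
	unsigned int counts[] = { 1, 2, 4, 8, 16, 64, 128 };
	for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++)
		printf("%3u cpus -> factor %u\n", counts[i], tunable_factor(counts[i]));
	return 0;
}
```

That prints factors 1, 2, 3, 4, 4, 4, 4: on anything with 8 or more CPUs the base tunables (scheduling latency, minimum granularity, wakeup granularity) are simply multiplied by 4. Nothing in that stops tasks from being scheduled onto cores 9 and up.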

5 Likes

I’m not sure it stops after 8 cores, but maybe the scheduling fails to allocate well?

just started a render on my desktop machine, while watching YT

the cores don’t all hit 100% though, so there is probably something being held back somewhere

after a bit, settling down to 70%

I can fill all cores with work, so it’s definitely not a case of “wasting all cores after the 8th”

Is it optimal? No idea.
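
If anyone wants to reproduce this without queueing a render, a trivial test program that spins one busy thread per online CPU will do. This is a hypothetical sketch, but on any box I’ve tried this kind of thing on, all cores peg near 100%:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Busy-loop forever; volatile stops the compiler from optimizing it away. */
static void *spin(void *arg)
{
	(void)arg;
	for (volatile unsigned long i = 0; ; i++)
		;
	return NULL;
}

int main(void)
{
	/* One spinner per online CPU. */
	long n = sysconf(_SC_NPROCESSORS_ONLN);
	printf("spinning on %ld CPUs; watch htop, Ctrl-C to stop\n", n);
	for (long i = 0; i < n; i++) {
		pthread_t t;
		pthread_create(&t, NULL, spin, NULL);
	}
	pause(); /* sleep until interrupted */
	return 0;
}
```

Compile with gcc -pthread spin.c -o spin, run it, and watch htop (Ctrl-C to stop).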

2 Likes

I reach 6400% with blender benchmark :slight_smile:

2 Likes

ohhhhhhhhhhhh

74.0 on my 12-core Ryzen with y-cruncher.
The system stays responsive, but less snappy. I could use some of the cores you’ve got (and some of those memory channels) :wink:

Another anecdote: it definitely scales above 8 cores :smile:

1 Like

Surprise, surprise! A hard-locked scaling limit would have been obvious even 20 years ago. The only non-linear scaling I’ve noticed was on the hardware side of things, and benchmarks tell me exactly that.

The problem we’re having is applications that don’t scale…I even remember a video where Wendell had to start multiple instances to saturate the number of threads provided by the CPU and Linux.

AMD and Intel invest a whole lot into kernel development…they don’t want bad numbers or bottlenecks for their new flagships.

P.S.: Nice 56-core machine. 8x64GB RDIMMs? :wink:

1 Like

Not only that, Peter Zijlstra (the main person behind the Linux scheduler) works for Intel. I’m sure that if it couldn’t scale beyond 8 cores, it would be fixed very quickly.

Yup, even though I don’t need this much RAM :slight_smile:

4 Likes

Isn’t there a mix-up here between kernel threads supposedly being limited to 8, as opposed to user threads, where we can all agree Linux uses all available CPUs/threads easily (as shown in some of the responses here)?

Okay, looking at the HN comments, I get the impression that the tunable scaling tops out at 8 cores by design: beyond that point the timeslices don’t need to be stretched any further, because CPU time is less contended and there is more room for more tasks, so the system doesn’t have to keep preempting individual tasks to fairly share threads?
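
If that’s the right reading, here’s a hedged back-of-the-envelope in C, assuming the commonly documented 6 ms base target latency and the capped log factor from earlier in the thread (real values depend on kernel version and config):

```c
#include <stdio.h>

/* Back-of-the-envelope: CFS targets (base latency * factor) as the period
 * in which every runnable task on a CPU should get to run once, so each of
 * N runnable tasks gets roughly that period / N. Assumed base: 6 ms. */
int main(void)
{
	const double base_ms = 6.0;        /* assumed default target latency */
	const unsigned int nr_running = 4; /* runnable tasks per CPU, assumed */

	/* factor 1, 2, 3, 4 corresponds to 1, 2, 4 and 8-or-more CPUs */
	for (unsigned int factor = 1; factor <= 4; factor++) {
		double period = base_ms * factor;
		printf("factor %u: target period %4.1f ms, ~%.2f ms per task\n",
		       factor, period, period / nr_running);
	}
	return 0;
}
```

In other words, the cap only stops the target period from stretching past about 24 ms; it says nothing about which cores tasks land on.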

1 Like

At work, we are running servers with 4 sockets that each have 26 CPU cores with hyperthreading. That gives us 4 × 26 × 2 = 208 CPU threads, which makes looking at htop, umm, difficult on a small screen. However, it definitely reports us maxing out all the threads under our heaviest loads. We often have 40 individual users on a server, each running their own builds. Oh, and these systems have 3 TB of DDR4 RAM.

1 Like

I have not read the article, but is that 8 logical cores, or 8 full cores?

Very untrue. I have a 32-core/64-thread processor, and all cores are used when I use ffmpeg or HandBrake.


Better to have it and not need it, than need it and not have it.

2 Likes