Kubernetes / Docker - E and P cores

Hello All,

I’m trying to gather some specific information on how Kubernetes and Docker handle E and P cores, and how this is managed. Generally speaking this should be automatic based on the needs of the container, though I understand the kernel can sometimes make placements sticky.

So the short version is: how do you differentiate between core types when deploying workloads, or is this effectively handled for you now?

I’m not finding a lot of good information on this. From a straight workload perspective the kernel handles it fine; however, when you virtualize, many of the solutions (Proxmox, for instance) tend to keep guests on specific threads.
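For reference, the kernel does expose which logical CPUs are which, so the core types can at least be enumerated before anything gets pinned. A minimal sketch, assuming a hybrid Intel chip and a kernel new enough (roughly 5.16+) to provide `/sys/devices/cpu_core/cpus` and `/sys/devices/cpu_atom/cpus`:

```python
# Sketch: list the P- and E-core logical CPU IDs on a hybrid Intel machine.
# Assumes the sysfs nodes below exist (hybrid CPU, kernel ~5.16+); they are
# simply absent on non-hybrid hardware, hence the empty-set fallback.
from pathlib import Path

def read_cpu_list(path):
    """Parse a sysfs CPU list like '0-15' or '0-7,16' into a set of ints."""
    try:
        text = Path(path).read_text().strip()
    except FileNotFoundError:
        return set()
    cpus = set()
    for part in text.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

p_cores = read_cpu_list("/sys/devices/cpu_core/cpus")  # P-core thread IDs
e_cores = read_cpu_list("/sys/devices/cpu_atom/cpus")  # E-core thread IDs
print("P-core threads:", sorted(p_cores))
print("E-core threads:", sorted(e_cores))
```

From there Docker can pin with `docker run --cpuset-cpus=...`, but nothing in Docker or Kubernetes picks the set for you based on core type, which is really what I’m asking about.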

I don’t believe Docker/Kubernetes, or containerization in general, behaves any differently from normal applications w.r.t. core scheduling. On my Alder Lake laptop, things are usually scheduled on P cores when possible, then on E cores, then on the second thread of a P core.
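If you want to watch that preference order yourself, here’s a quick sketch: burn CPU for a few seconds and report where the scheduler keeps the task (field 39 of `/proc/self/stat` is the CPU the task last ran on), then cross-reference the printed IDs against the sysfs P/E core lists:

```python
# Sketch: observe which logical CPU the scheduler keeps a busy task on.
import time

def current_cpu():
    # /proc/self/stat: fields after the ')' start at field 3 ('state'),
    # so overall field 39 ('processor') is index 36 in this list.
    stat = open("/proc/self/stat").read()
    fields = stat.rsplit(")", 1)[1].split()
    return int(fields[36])

for _ in range(5):
    end = time.time() + 1.0
    while time.time() < end:
        pass  # busy-wait so the scheduler treats this as a hot task
    print("running on CPU", current_cpu())
```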

Newer kernels probably have better awareness of P/E core scheduling, so try to stick to those. There’s probably no need to set core affinities manually unless you hit specific issues or want to keep the P cores free for latency-sensitive work.
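If you do end up needing manual affinity, it’s a one-liner from userspace. A sketch that restricts a process to the E-cores, e.g. to keep a background job off the P-cores (the IDs below are placeholders; take the real list from `/sys/devices/cpu_atom/cpus`):

```python
# Sketch: pin the current process to the E-cores only.
import os

E_CORES = {16, 17, 18, 19}  # placeholder IDs, machine-specific

os.sched_setaffinity(0, E_CORES)  # pid 0 = this process
print("now restricted to:", sorted(os.sched_getaffinity(0)))
```

The same thing from the shell is `taskset -c 16-19 <command>`.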

That’s how things should work, at least. I’m planning to get a 13th-gen Core chip and I’m worried about this too.

Do you see containers that are more or less background noise (very low CPU time) scheduled on the E-cores? Or do the P-cores wake up and boost into high wattage regularly? So low load = let the E-cores do it and keep P-cores at low clocks?
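The way I’d check that is to poll each core’s current frequency while the box is mostly idle and watch whether the P-cores keep spiking into boost. A rough sketch, assuming the standard cpufreq sysfs layout (`scaling_cur_freq` is reported in kHz):

```python
# Sketch: print each core's current frequency once a second.
import glob
import time

def cpu_id(path):
    # path looks like /sys/devices/system/cpu/cpu12/cpufreq/scaling_cur_freq
    return int(path.split("/")[5][3:])

paths = sorted(
    glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"),
    key=cpu_id,
)
for _ in range(5):
    readings = []
    for p in paths:
        mhz = int(open(p).read()) // 1000  # scaling_cur_freq is in kHz
        readings.append(f"cpu{cpu_id(p)}:{mhz}MHz")
    print("  ".join(readings))
    time.sleep(1)
```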

k8s is super flexible,

… but I guess it’s basically “newer kernel is better”.


Generally, I’m seeing newer kernel == better as well. What I’m not sure of is how this translates to containers, because hypervisors tend to get sticky with assigned cores. With Proxmox, for instance, containers get assigned cores based on what Proxmox decides, and they stay there: they don’t move with usage, and you can end up with mixed core types in one allocation, which produces weird utilization.

The k8s document linked really sheds light on multi-socket CPUs vs. E/P cores.

Though, depending on the configuration, you could group the P cores into the static allocation, so all the other low-noise/low-use containers run best-effort on whatever cores remain (at that point, likely the unallocated P cores plus all the E cores).
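Something like the following is what I mean: a sketch that emits a KubeletConfiguration fragment putting the E-cores into `reservedSystemCPUs`, so the static CPU Manager policy hands the remaining (P) cores out exclusively to Guaranteed pods. The CPU Manager only sees CPU IDs, not core types, so the split has to be spelled out by hand; the range below is a placeholder for one particular machine:

```python
# Sketch: build a KubeletConfiguration fragment that reserves the E-cores,
# leaving the P-cores for exclusive allocation under the static CPU Manager
# policy. The E-core range is a placeholder; read the real IDs from sysfs.
import json

E_CORES = "16-23"  # hypothetical E-core logical CPU IDs

config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "cpuManagerPolicy": "static",
    # Exclusive CPUs are handed out from everything NOT reserved here,
    # i.e. the P-cores in this layout.
    "reservedSystemCPUs": E_CORES,
}
# Kubelet config files are usually YAML, but JSON is valid YAML too.
print(json.dumps(config, indent=2))
```

BestEffort and Burstable pods then float on the shared pool (the reserved CPUs plus anything not exclusively allocated), which matches the “static P cores, best effort everywhere else” split, just expressed in CPU IDs rather than core types.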

I kind of wish the documentation were more explicit about this behaviour.

Remembering this thread, I just watched this video about Debian kernel 6.2 and Proxmox; it explains a lot on the topic, incl. benchmarks.

That’s what I reviewed earlier, and I’ve read more on it since. But it doesn’t address how this works for k8s and container provisioning specifically; it’s specific to Proxmox, and to virtualizing containers in Proxmox so the core types are hidden.

That kind of layering isn’t appropriate for k8s unless you’re running an internal private cloud, where you’ve already paid (in time or money) for a virtualization environment and are managing k8s clusters and nodes on top of it.

The management overhead of ESXi, Proxmox, KVM, OpenShift, etc., just to run a single node provisioned for k8s doesn’t seem worth it, especially for lightweight clusters.