OS: CentOS 7
Kernel: 4.19-rc2
CPU: AMD Ryzen Threadripper 2990WX
LXC: lxc-1.0.11-1.el7.x86_64
CentOS 7 container created from the template provided by the lxc package that’s part of epel-release. It works normally without special configuration (i.e. it runs as a container and, with sufficient load, will consume 100% of the cores/threads/memory on the host machine).
Now I want to configure it so that it is limited to 16 “near” cores, with no SMT threads.
The objective of the following setting is to pin the container to only the physical cores of the Threadripper, and within those, to only the cores that have memory directly attached.
/var/lib/lxc/mycentos/config:
lxc.cgroup.cpuset.cpus = 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
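The even-numbered list is meant to select one SMT thread per core on the memory-attached dies; the layout can be confirmed on the host with standard tools (on the 2990WX, two of the four dies should report 0 MB of local memory):

numactl --hardware              # NUMA nodes, their CPUs, and per-node memory sizes
lscpu --extended=CPU,CORE,NODE  # which logical CPU sits on which core and NUMA node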
[root@mycentos ~]# grep processor /proc/cpuinfo | wc -l
64
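(For reference, the cgroup itself can be checked from the host; the path below assumes the cgroup v1 layout used by LXC 1.0:)

[root@host ~]# cat /sys/fs/cgroup/cpuset/lxc/mycentos/cpuset.cpus   # should echo the 16-CPU list from the config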
What am I missing here?
UPDATE/EDIT:
[SOLUTION] lxc is working as advertised: when resource scheduling is left to the kernel, it obeys the lxc.cgroup.cpuset.cpus value in the config file. What it does NOT do is modify the container’s view of /proc/cpuinfo or other such information, so if the application in question makes affinity/numactl calls directly, based on information it gathers from /proc, it will make poor choices.
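The difference shows up inside the container (illustrative commands; nproc and taskset go through sched_getaffinity(2), which does honor the cpuset, while /proc is not virtualized):

[root@mycentos ~]# nproc                             # should report 16
[root@mycentos ~]# taskset -cp $$                    # should list 0,2,4,...,30
[root@mycentos ~]# grep -c processor /proc/cpuinfo   # still 64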
This is a “feature” of lxc, it seems…
In my case this is addressed by intercepting numactl calls in the container: /usr/bin/numactl is wrapped with a bash script that bypasses the call to numactl and executes the requested application directly (the arguments passed to numactl have the form: options followed by application [application args], so the NUMA options are stripped and the application is executed as-is).
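A minimal sketch of such a wrapper, assuming only the common numactl option forms need handling:

#!/bin/bash
# Installed as /usr/bin/numactl: strip numactl's own policy options and
# exec the requested application directly, so the kernel schedules it
# within the cpuset assigned by the lxc config.
args=("$@")
i=0
while [ $i -lt ${#args[@]} ]; do
  case "${args[$i]}" in
    --*=*) ;;                     # long option with attached value: skip it
    --interleave|--membind|--cpunodebind|--physcpubind|--preferred|-i|-m|-N|-C|-p)
           i=$((i+1)) ;;          # option with a separate value: skip both tokens
    -*)    ;;                     # flag without a value: skip it
    *)     break ;;               # first non-option token is the application
  esac
  i=$((i+1))
done
exec "${args[@]:$i}"

A real wrapper would also need to handle informational invocations such as numactl --hardware, which have no application to exec; the sketch only covers the run-an-application form.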
This bypass allows the (shared host) kernel to schedule the container’s workload “natively” among the CPU resources assigned by the lxc configuration.