
Any way to move host memory to another NUMA node before VM boot?

Hello! I am running a TR 2920x with 4x 8GB of RAM in NUMA mode.

I have a Win 10 VM with CPU pinning to NUMA node 0 and (almost) all 16GB of RAM of that node allocated to the VM.

If I have e.g. Firefox (with lots of tabs) open when I try to start my VM, it fails to boot because some of its memory is already occupied by the host.

Is there a command or script to move host memory to NUMA node 1 upon VM boot, without having to keep half of my RAM permanently preallocated to the VM? i.e. be able to use all of the RAM when the VM is not running, but move host RAM contents to node 1 when the VM is about to start.

I use the script here to allocate memory for hugepages from the appropriate node.
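The linked script isn't reproduced here, but the general approach it describes can be sketched via the per-node sysfs interface. The node number, page count, and headroom below are assumptions for a 2920X with 16 GiB per node; this must run as root before the VM starts.

```shell
#!/bin/sh
# Sketch (not the forum's actual script): reserve 2 MiB hugepages on a
# single NUMA node via sysfs. Run as root before starting the VM.

# 2 MiB pages per GiB requested: 1 GiB / 2 MiB = 512 pages
pages_for_gib() {
    echo $(( $1 * 512 ))
}

NODE=0                        # node the VM is pinned to (assumption)
PAGES=$(pages_for_gib 15)     # ~15 GiB, leaving headroom on a 16 GiB node
SYSFS="/sys/devices/system/node/node$NODE/hugepages/hugepages-2048kB/nr_hugepages"

if [ -w "$SYSFS" ]; then
    echo "$PAGES" > "$SYSFS"
    # The kernel may reserve fewer pages than requested if the node's
    # memory is fragmented, so read the value back to check
    cat "$SYSFS"
fi
```

Reading the value back matters: if the host has already fragmented node 0's memory (the Firefox situation above), the kernel silently reserves fewer pages than asked.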


The previous solution posted will reserve the memory at boot so you can use it for the VM.
That didn't go well for me, though, so I ended up with @Pixo's solution:

But once you have that working, you might not want Firefox to steal CPU resources from your VM.
Your Firefox instance is spawned from the process tree of your GUI; for me that is lxdm.
By limiting the resources of lxdm, like locking it to certain cores, you can control what resources it and everything that spawns from it (like Firefox) can use.
As an Arch user I have chosen to use lxdm, and it is started by systemd.
Systemd has implemented NUMA support for both memory and CPU affinity, and personally I'm just waiting for a release.

Even though I have limited which cores lxdm uses, I can run things through the terminal that won't be spawned from the lxdm tree and thus won't be limited by the resource restrictions I have put on lxdm.service.
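The lxdm restriction described above can be done with a systemd drop-in using `CPUAffinity=`. A minimal sketch, where the core range 8-15 is an assumption (check `lscpu` for how your 2920X actually maps cores to nodes); newer systemd additionally offers `NUMAPolicy=`/`NUMAMask=` for the memory side, which is the feature referred to above:

```shell
#!/bin/sh
# Sketch: generate a systemd drop-in that pins lxdm.service -- and every
# process it spawns, Firefox included -- to the cores of NUMA node 1.
HOST_CORES="8-15"   # assumption: node 1's cores on this 2920X

dropin() {
    printf '[Service]\nCPUAffinity=%s\n' "$1"
}

# As root you would install it with something like:
#   mkdir -p /etc/systemd/system/lxdm.service.d
#   dropin "$HOST_CORES" > /etc/systemd/system/lxdm.service.d/affinity.conf
#   systemctl daemon-reload && systemctl restart lxdm
dropin "$HOST_CORES"
```

A drop-in is used rather than editing the shipped unit so the restriction survives package updates.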

PS: we have the same CPU, but I have more memory than you; I have not tried Pixo's solution when giving hugepages all my memory on one node.


Yeah, I am not preallocating pages and I am using transparent hugepages, so neither solution really applies.

Your lxdm pinning sounds interesting for a "semi-automatic" approach to the problem. I don't really launch GUI applications from the CLI, but it would be a solution.

I was talking about something like migratepages.

migratepages seems to be able to do what I want for single PIDs. I wonder if a bash script could run migratepages for all active PIDs before launching the VM, completely freeing up the RAM allocated to the VM, while still being able to use that NUMA node and its RAM without user intervention when the VM is off.
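Such a loop could be sketched like this, assuming a two-node layout with the VM on node 0. `migratepages` ships with the numactl package; the script below only prints the plan, so you can review it and pipe it to `sh` as root to actually migrate. Locked or kernel-owned pages will still refuse to move, so treat this as best-effort.

```shell
#!/bin/bash
# Sketch: build a migratepages(8) call for every userspace PID, moving
# its pages from node 0 (the VM's node) to node 1 (the host's node).
print_migrate_plan() {
    local from_node=$1 to_node=$2 pid cmd
    for pid in /proc/[0-9]*; do
        pid=${pid#/proc/}
        # Kernel threads have an empty cmdline; skip them, since their
        # pages cannot be migrated this way anyway
        cmd=$(tr -d '\0' < "/proc/$pid/cmdline" 2>/dev/null)
        [ -n "$cmd" ] || continue
        echo migratepages "$pid" "$from_node" "$to_node"
    done
}

print_migrate_plan 0 1
```

Running `print_migrate_plan 0 1 | sudo sh` right before launching the VM would approximate the hands-off behaviour described above, with node 0 free for the host whenever the VM is off.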