Any way to move host memory to another NUMA node before VM boot?

Hello! I am running a TR 2920x with 4x 8GB of RAM in NUMA mode.

I have a Win 10 VM with CPU pinning to NUMA node 0 and (almost) all 16GB of RAM of that NUMA node allocated to the VM.

If I have e.g. Firefox (with lots of tabs) open when I try to start my VM, it fails to boot because some of that node's memory is already occupied by the host.

Is there a command or script to move host memory to NUMA node 1 upon VM boot, without having to keep half of my RAM preallocated to the VM? I.e. be able to use all of the RAM when the VM is not running, but move host RAM contents to node 1 when the VM is about to start.

I use the script here to allocate memory for hugepages from the appropriate node
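For reference, per-node hugepage allocation goes through sysfs. A minimal sketch, assuming 2 MiB hugepages on node 0 (the page count of 7680 ≈ 15 GiB is just an example; adjust it to whatever you actually give the guest):

#!/bin/bash

# Reserve 2 MiB hugepages on NUMA node 0 only (7680 pages ≈ 15 GiB).
# Needs root; pick the count to match what the VM is given.
echo 7680 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages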


The previously posted solution will reserve the memory at boot so you can use it for the VM.
Though that didn't go well for me, so I ended up with @Pixo's solution:

But once you have that working, you might not want Firefox to steal CPU resources from your VM:
Your Firefox instance is spawned from the process tree of your GUI; for me that is lxdm.
By limiting the resources of lxdm, like locking it to certain cores, you can control what resources it and everything that spawns from it (like Firefox) uses.
As an Arch user I have chosen to use lxdm, and it is started by systemd.
Systemd has implemented NUMA support for both memory and CPU affinity, and personally I'm just waiting for a release.
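To illustrate the idea, a systemd drop-in for lxdm.service can already pin it today with CPUAffinity=. A rough sketch, assuming the VM gets node 0 and the host keeps node 1; the core list 6-11,18-23 is hypothetical for a 2920X, so check lscpu -e or numactl --hardware for your actual layout:

#!/bin/bash

# Create a drop-in that pins lxdm (and everything it spawns, e.g. Firefox)
# to the host-side cores. 6-11 and 18-23 are assumed to be node 1 here.
mkdir -p /etc/systemd/system/lxdm.service.d
cat > /etc/systemd/system/lxdm.service.d/cpuaffinity.conf <<'EOF'
[Service]
CPUAffinity=6-11 18-23
EOF

systemctl daemon-reload
# Applies the next time lxdm.service (re)starts.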

Even though I have limited what cores lxdm uses, I can run stuff through the terminal that won't be spawned from the tree of lxdm and thus won't be limited by the resource restrictions I have put on lxdm.service.

PS: we have the same CPU, but I have more memory than you; I have not tried Pixo's solution of giving hugepages all my memory on one node.


Yeah I am not preallocating pages and I am using transparent huge pages, so neither solution really applies.

Your lxdm pinning sounds interesting as a “semi-automatic” approach to the problem. I don't really launch GUI applications from the CLI, but it would be a solution.

I was talking about something like migratepages.

migratepages seems to be able to do what I want for single PIDs. I wonder if a bash script could run migratepages on every active PID before launching the VM, in order to completely free up the RAM allocated to the VM, while still being able to use that NUMA node and its RAM without user intervention when the VM is off.
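For a single process, the call (migratepages comes with the numactl package; the syntax is migratepages pid from-nodes to-nodes) would look something like this, using Firefox as a hypothetical example:

# Move one process's pages from node 0 to node 1; pidof -s returns a single PID.
migratepages "$(pidof -s firefox)" 0 1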

Bump?

Well, here is a rough script that does it:

#!/bin/bash

# Move every running process's memory pages from NUMA node 0 to node 1.
ps -A | awk 'FNR == 1 {next} { print $1 }' | while read -r pid; do
  migratepages "$pid" 0 1
done

This takes the PID of every running process and moves its RAM pages from node 0 to node 1. You can change 0 1 to 1 0 depending on whether you are passing through cores from node 0 or node 1.

You might get some “invalid argument” errors for some PIDs, but it works.
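If you want to verify that node 0 actually got freed, the standard numactl/numastat tools show per-node memory, e.g.:

# Free memory per NUMA node (node 0 should be mostly free after migrating)
numactl --hardware | grep -i free

# Per-node memory breakdown, including hugepages
numastat -m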

Edit: Also, to automate the script to run on VM boot, add the file /etc/libvirt/hooks/qemu with these contents and make it executable:

#!/bin/bash

# Run the migration script when libvirt prepares the "win10" domain for startup.
if [[ $1 == "win10" ]] && [[ $2 == "prepare" ]]
then
  sh /home/.scripts/numa_win10.sh
fi

Edit “win10” and the path to the migratepages script accordingly. You could probably replace the sh /home/.scripts… line with the original script mentioned above, but I like to have my scripts consolidated in one place, so I am running it with sh.
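For completeness, the hook also needs to be executable, and libvirt only looks for a newly created hook script when the daemon starts, so roughly:

chmod +x /etc/libvirt/hooks/qemu
# Restart so libvirtd picks up the new hook file (only needed the first time).
systemctl restart libvirtd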

Have you checked out: https://github.com/spheenik/vfio-isolate