hugepages+NUMA possible?

I have a NUMA system and I'd like to use hugepages for my VM, but when I enable hugepages, the requested number of pages gets split across both nodes. Is it possible to force ALL of the hugepages onto a specific NUMA node?
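For reference, the kernel exposes per-node hugepage pools through sysfs, separate from the global `vm.nr_hugepages` knob. A sketch of forcing all pages onto node 0, assuming 2 MiB pages, a two-node system, and a 16 GiB guest (the node names and page size are from my setup, not the original post):

```shell
#!/bin/sh
# 16 GiB of guest RAM backed by 2 MiB hugepages: 16 * 1024 / 2 = 8192 pages
NR_PAGES=$(( 16 * 1024 / 2 ))

# Per-node allocation (requires root): give node0 everything,
# explicitly zero node1 so nothing lands there.
echo "$NR_PAGES" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 0           > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

# Verify the split per node.
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
```

Note the allocation can silently fall short if node 0's memory is already fragmented, so check the verify step actually reports the number you asked for.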


I thought hugepages were largely obsolete, given advancements in QEMU and the kernel itself?

I looked into it after upgrading my system to 512 GB of RAM, but couldn't find any documentation for modern kernel releases (Linux 5.x).

I have adapted all of my VMs using numatune in the XML, so they are assigned memory pages from the same NUMA node as their pinned CPUs, and they fail to start if that memory is not available.


I'd be interested to see the XML setup for that.

Interesting, I wasn’t really familiar with numatune.

So far this seems to be the best documentation


It's nothing particularly exotic. There's more tuning that can be done if your VM is I/O-intensive, but nothing I run is.

  <memory unit="KiB">16777216</memory>
  <currentMemory unit="KiB">16777216</currentMemory>
  <vcpu placement="static" cpuset="12-15,44-47">8</vcpu>
  <iothreads>2</iothreads>
  <cputune>
    <iothreadpin iothread="1" cpuset="12,44"/>
    <iothreadpin iothread="2" cpuset="13,45"/>
  </cputune>
  <numatune>
    <memory mode="strict" nodeset="0"/>
  </numatune>
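If you also want hugepage backing on top of the strict numatune above, libvirt's `<memoryBacking>` element requests it; combined with `mode="strict"` and `nodeset="0"`, the guest's pages must come from node 0's pool or the VM fails to start. A sketch (the explicit `<page>` element is optional; I'm assuming 2 MiB pages here):

```xml
  <memoryBacking>
    <hugepages>
      <page size="2048" unit="KiB" nodeset="0"/>
    </hugepages>
  </memoryBacking>
```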

And scrap all that; apparently it's no longer needed.

Hugepages are enabled automatically (at least on Fedora).
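This likely refers to transparent hugepages (THP), which the kernel applies on its own, as opposed to the explicitly reserved pools discussed above. A quick way to check what your kernel is actually doing (assuming a typical distro kernel; paths are standard sysfs/procfs):

```shell
#!/bin/sh
# Current THP policy; the bracketed value is the active one
# (commonly "always" or "madvise").
cat /sys/kernel/mm/transparent_hugepage/enabled

# System-wide hugepage counters: AnonHugePages is THP-backed memory,
# HugePages_Total/Free are the explicitly reserved pools.
grep -i huge /proc/meminfo
```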
