Return to Level1Techs.com

First time GPU passthrough - How do I pin CPU (1950X)

#1

Hi,

I’m really new to GPU passthrough and Linux, and I have two questions, because there are a few things I really don’t understand.
(What I’ve done so far: https://pastebin.com/xMfd1SzW)

My System:

  • Fedora 29 / Windows 10
  • TR 1950X
  • ROG Zenith X399
  • 4x 8GB 3200 MHz CL14 RAM
  • GTX 970 (Host)
  • RTX 2080 (Guest)
  • 2x 970 EVO NVMe

1.:
I really don’t understand how to pin my CPU. I’ve seen a lot of sample configurations, but I don’t understand what “emulatorpin” and “iothreadpin” actually do, so I can’t configure them correctly.

This is my current configuration: win10.xml
CPU information: lscpu
lstopo:

My goal is to get a 12-core, 24-thread system, so this would be my approach for the first lines:

<vcpu placement='static'>12</vcpu>
<iothreads>24</iothreads>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='6'/>
  <vcpupin vcpu='2' cpuset='1'/>
  <vcpupin vcpu='3' cpuset='7'/>
  <vcpupin vcpu='4' cpuset='2'/>
  <vcpupin vcpu='5' cpuset='8'/>
  <vcpupin vcpu='6' cpuset='3'/>
  <vcpupin vcpu='7' cpuset='9'/>
  <vcpupin vcpu='8' cpuset='4'/>
  <vcpupin vcpu='9' cpuset='10'/>
  <vcpupin vcpu='10' cpuset='5'/>
  <vcpupin vcpu='11' cpuset='11'/>

But how should I configure the remaining lines?

  <emulatorpin cpuset='0-1'/>
  <iothreadpin iothread='1' cpuset='0-1'/>
</cputune>

2.:
So far I have a running VM with Windows 10, and my RTX 2080 is displayed in Device Manager. But it’s showing error code 43. I thought adding <hidden state='on'/> would remove this error…
My configuration: win10.xml
Is there something I’m missing?

I would be really grateful for any kind of help!

0 Likes

#2

For the Code 43 fix you also need to add <vendor_id state='on' value='whatever'/> in the Hyper-V section.
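
In the libvirt XML this ends up looking something like the fragment below (the value string is arbitrary; the <hidden state='on'/> element from the first post is shown for context):

```xml
<features>
  <hyperv>
    <!-- any non-empty string works; it hides the KVM signature from the NVIDIA driver -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```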


1 Like

#3

Code 43 is gone. Thank you very much for your help!
So only my first problem is still relevant.

1 Like

#4
0 Likes

#5

Thank you very much for the links! So I’m going for an 8-core, 16-thread VM, I guess.

So far I’ve switched to local mode in the BIOS, and lstopo now shows my NUMA nodes:

After that I followed the steps from the second link you posted:

NODE=0
echo 4096 > /sys/devices/system/node/node${NODE}/hugepages/hugepages-2048kB/nr_hugepages
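
As a quick sanity check (simple arithmetic, not from the guide): 4096 pages of 2048 kB each back exactly 8 GiB of guest RAM:

```shell
# 4096 hugepages x 2048 kB per page, converted to GiB
pages=4096
page_kb=2048
echo "$(( pages * page_kb / 1024 / 1024 )) GiB"   # prints: 8 GiB
```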

and changed my config to this: win10.xml

Now I’m getting bluescreens with “System thread exception not handled”.

0 Likes

#6

For hugepages I allocate the RAM in grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=12 nohz_full=0-5 rcu_nocbs=0-5"

If you have plenty of RAM it will give you fewer headaches.

I set nohz_full and rcu_nocbs on the cores I pass through to the Windows VM to make them tickless. Some guides mention using isolcpus, but that is now deprecated in the kernel.

This VFIO performance thread was also helpful.

Your libvirt XML does not seem to be using a virtio-scsi disk controller, which is needed for iothreads to work. You need to add a SCSI controller in virt-manager and set your disks to use it.
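
For reference, a minimal virtio-scsi controller bound to an iothread looks roughly like this in the libvirt XML (the index, iothread id and device path below are placeholders; adjust them to your setup):

```xml
<controller type='scsi' index='0' model='virtio-scsi'>
  <driver iothread='1'/>
</controller>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/your-disk-here'/>
  <target dev='sda' bus='scsi'/>
</disk>
```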

0 Likes

#7

Thanks for the help so far!

I added one SCSI controller, a small VirtIO drive and started the VM.

Three unknown devices appeared in Device Manager:
2x SCSI controller
1x PCI device

I then installed the Balloon, viostor and vioscsi drivers for them.

I was also able to install the NetKVM driver for my network adapter.

I checked every hardware ID against the article below to install more drivers, without luck.
https://access.redhat.com/articles/2470791

So those four drivers were the only ones I was able to install.

After that I restarted, changed my boot drive to VirtIO and changed my config.

This is the config that’s running fine, but with a lot of random stuttering and without CPU pinning:
https://paste.fedoraproject.org/paste/Ql7FyTVqkSXK8AWqmWG8pw

After adding

  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement='static' cpuset='0-15'>8</vcpu>
  <iothreads>2</iothreads>
  <iothreadids>
    <iothread id='1'/>
    <iothread id='2'/>
  </iothreadids>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
    <emulatorpin cpuset='4,12'/>
    <iothreadpin iothread='1' cpuset='5,13'/>
    <iothreadpin iothread='2' cpuset='6,14'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0-1'/>
  </numatune>

  <cpu mode='host-passthrough' check='partial'>
    <topology sockets='1' cores='4' threads='2'/>
    <numa>
      <cell id='0' cpus='0-7' memory='8' unit='GiB' />
    </numa>
  </cpu>

This error appears:

Error starting domain: internal error: process exited while connecting to monitor: 2019-02-25T21:30:46.969606Z qemu-system-x86_64: This family of AMD CPU doesn't support hyperthreading(2). Please configure -smp options properly or try enabling topoext feature.

2019-02-25T21:30:46.973474Z qemu-system-x86_64: unable to map backing store for guest RAM: Cannot allocate memory

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/libvirtobject.py", line 66, in newfn
    ret = fn(self, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1420, in startup
    self._backend.create()
  File "/usr/lib64/python3.7/site-packages/libvirt.py", line 1080, in create
    if ret == -1: raise libvirtError('virDomainCreate() failed', dom=self)
libvirt.libvirtError: internal error: process exited while connecting to monitor: 2019-02-25T21:30:46.969606Z qemu-system-x86_64: This family of AMD CPU doesn't support hyperthreading(2). Please configure -smp options properly or try enabling topoext feature.
2019-02-25T21:30:46.973474Z qemu-system-x86_64: unable to map backing store for guest RAM: Cannot allocate memory

My main problem is really the constant stuttering in my VM. I thought that was caused by not pinning the CPU.

0 Likes

#8

The clue here is topoext:

Starting with QEMU 3.1 the TOPOEXT CPUID flag is disabled by default. In order to use hyperthreading (SMT) on AMD CPUs you need to enable it manually:

<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='4' threads='2'/>
  <feature policy='require' name='topoext'/>
</cpu>
  • You should go through the Arch Linux Wiki on OVMF from top to bottom.

  • Also allocate the RAM in /etc/default/grub as I have done above, then run update-grub as root and reboot to fix the memory errors.

1 Like

#9

Hi, that’s probably a good idea… I thought I’d already read the whole guide.

I’ll do that again and try everything from the beginning.

One thing I discovered yesterday: adding “host-passthrough” to the config causes my bluescreens.

I will report back!

0 Likes

#10

Re-reading the Threadripper VFIO notes from tripleback.net: the author allocates the memory slightly differently, so that the allocated RAM is directly connected to the NUMA node whose cores are passed through.

Another tweak I use is a libvirt hook that switches the Linux CPU frequency governor to performance when the VM starts and back to ondemand when it stops. See the vfio-tools GitHub repo for how to create a hook. When I get around to it I’ll send a PR to the repo.
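
A rough sketch of such a hook (the path, VM name and argument convention are assumptions; the vfio-tools repo has the real implementation):

```shell
#!/bin/sh
# sketch of /etc/libvirt/hooks/qemu: libvirt calls this with the
# VM name as $1 and the operation (e.g. "start", "stopped") as $2

governor_for() {
  # assumed policy: performance while the VM runs, ondemand otherwise
  case "$1" in
    start)   echo performance ;;
    stopped) echo ondemand ;;
    *)       echo keep ;;
  esac
}

gov=$(governor_for "$2")
if [ "$1" = "win10" ] && [ "$gov" != "keep" ]; then
  # needs root: set the chosen governor on every CPU
  for f in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo "$gov" > "$f"
  done
fi
```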

Depending on your monitor you may be able to use the switch_displays.sh hook.

0 Likes

#11

Hey, it took me a while… But it’s working now! I don’t use CPU pinning for now, but it’s running pretty stable so far.
The problem was: host-passthrough caused bluescreens, but I was able to use it with this trick.
But after that I only got 2 cores in my VM, running at 100% all the time.
I then removed host-passthrough and chose EPYC from the dropdown menu (EPYC-IBPB was selected before). Now I can use all my cores in my VM and it’s running much better than before! It’s really great for work, but in games I’m getting some stuttering…
I’ve also added the hugepage part to my grub config and to the VM config, but I’m always getting a RAM allocation error.
This is my new config file: https://paste.fedoraproject.org/paste/fzQCUT1mVJXFsS9KC6iUHw

0 Likes

#12

@JB08 - you also need to manually add the hugepages to the libvirt xml:

<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>
<memoryBacking>
  <hugepages/>
</memoryBacking>

You can manually edit your XML with virsh by running something like this in a shell or script:

cd /etc/libvirt/qemu
sudo EDITOR=nano virsh

& then from the virsh prompt:

edit my-vm-name


You can also set kernel options for KVM by creating /etc/modprobe.d/kvm.conf to fix the Windows blue screen as shown here
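
(The linked fix usually boils down to a one-line module option; the exact line below is an assumption based on the common ignore_msrs workaround, so check the link for your case:)

```
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
```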


NB: the following XML is for an AMD FX 8370 using 8 cores without SMT and is just to show how I use real-time schedulers (fifo).

For Intel and AMD examples with SMT / hyperthreads see the Arch Wiki CPU pinning examples:

 <vcpu placement='static'>8</vcpu>
  <iothreads>1</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <emulatorpin cpuset='6-7'/>
    <iothreadpin iothread='1' cpuset='6-7'/>
    <vcpusched vcpus='0' scheduler='fifo' priority='1'/>
    <vcpusched vcpus='1' scheduler='fifo' priority='1'/>
    <vcpusched vcpus='2' scheduler='fifo' priority='1'/>
    <vcpusched vcpus='3' scheduler='fifo' priority='1'/>
    <vcpusched vcpus='4' scheduler='fifo' priority='1'/>
    <vcpusched vcpus='5' scheduler='fifo' priority='1'/>
    <iothreadsched iothreads='1' scheduler='fifo' priority='99'/>
  </cputune>

To make the kernel tickless on cores 0-5, which I set as fifo in libvirt, I also set this in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.use_blk_mq=1 amd_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=16 nohz_full=0-5 rcu_nocbs=0-5..........other-settings........"

If your system is not running from an SSD / NVMe drive you do not want scsi_mod.use_blk_mq=1 as shown above.

For further info on io schedulers see the Arch Wiki.

0 Likes

#13

Pinning CPUs is more of a thing for >16-core AMD CPUs for now. You will do fine on a 1950X.

0 Likes

#14

Of course I also added hugepages to the config!
I’m still getting this error:

error: internal error: qemu unexpectedly closed the monitor: 2019-03-15T17:44:00.015198Z qemu-system-x86_64: unable to map backing store for guest RAM: Cannot allocate memory

win10.xml

grub:
GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora-swap rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap rhgb quiet rd.driver.blacklist=nouveau iommu=1 amd_iommu=on rd.driver.pre=vfio-pci default_hugepagesz=1G hugepagesz=1G hugepages=12"

0 Likes

#15

@JB08 You are allocating 16 GB of RAM in the libvirt XML but only 12 GB in grub, so in GRUB_CMDLINE_LINUX you need:

default_hugepagesz=1G hugepagesz=1G hugepages=16
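
A quick way to double-check the numbers (plain arithmetic, not from the thread): divide the <memory unit='KiB'> value by 1048576 to get the number of 1 GiB hugepages the VM needs:

```shell
# <memory unit='KiB'>16777216</memory> -> required 1 GiB hugepages
mem_kib=16777216
echo "hugepages=$(( mem_kib / 1048576 ))"   # prints: hugepages=16
```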

@Marten - according to Red Hat, CPU pinning gives the best performance when you have more than one NUMA node.

1 Like

#16

Haha, thank you for your help - you’re right. Hugepages are working now.

1 Like