
Increasing VFIO VGA Performance



Hello @gnif ,

I too am having freezing/game crashes, even though games otherwise run fine. I've been troubleshooting everything from drivers to a bad card.

Never thought of a ‘pinning issue or non local memory access’.

I am pinning my CPUs via libvirt. As far as memory is concerned, supposedly all of the VM's memory is allocated on the NUMA node associated with the CPUs I have pinned the VM to.
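For reference, my pinning/NUMA setup looks roughly like this (a sketch only; the vCPU count, host core IDs, and nodeset are placeholders, not my actual topology):

```xml
<domain type='kvm'>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <!-- Pin each vCPU to a host core on NUMA node 0 (example core IDs) -->
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
  </cputune>
  <numatune>
    <!-- Force guest memory to be allocated from that same NUMA node -->
    <memory mode='strict' nodeset='0'/>
  </numatune>
</domain>
```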

I was just curious whether you meant only that CPUs need to be pinned, or whether pinning itself can sometimes cause issues, and if so, how to troubleshoot that?

This may be getting too off topic; sorry if it's a loaded question.


Pinning will not affect stability of the VM, only performance.

Correct, please start a new thread. I apologise if this feels like a short answer, but please note that my time is limited between the projects I have on at the moment, and I cannot help debug a QEMU fault of this nature.


@gnif it looks like QEMU 3.1 has been released. Not sure if your patch made it in; just looking at the pcie.c file, GitHub shows it hasn't been touched for at least a year.

In case it didn't reach the release, can you please provide a clear-cut way to apply the patch to QEMU 3.1? Is there a catch to it? I'm not entirely literate in C, but I can compile things from source (like Apache and PHP). (I'm using Fedora 29, if that matters.)


I am sorry, but I do not have the time at present to document how to apply the patches, nor do I need to: this information is freely available. See git am for applying patches from a mailbox file.
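To illustrate the `git am` workflow gnif mentions, here is a minimal, self-contained sketch using a throwaway demo repo under /tmp (for QEMU you would instead run `git am path/to/patch.mbox` inside your qemu source checkout; the paths and commit messages here are made up for the demo):

```shell
set -e
rm -rf /tmp/gitam-demo && mkdir -p /tmp/gitam-demo && cd /tmp/gitam-demo

# Build a small source repo with one commit we can export as a patch.
git init -q src
cd src
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "base"
echo "patched content" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add file.txt"

# Export the top commit as a mailbox-format patch file -- the same format
# produced by `git format-patch` and used on the qemu mailing list.
git format-patch -1 -o .. >/dev/null
cd ..

# Fresh clone, rewound so it does not yet contain the commit.
git clone -q src dst
cd dst
git reset -q --hard HEAD~1

# Apply the patch from the mailbox file; this recreates the commit.
git -c user.name=demo -c user.email=demo@example.com am ../0001-add-file.txt.patch
cat file.txt
```

After `git am` succeeds, the commit exists in the destination repo exactly as it did in the source, author information and all.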

I am not tracking when this patch will make it into QEMU; I do however know it's being actively worked on, as I am seeing updates almost daily from Alex.


Went to do a new build and the patch set failed to apply; it seems that as of Dec 19 this patch set was committed to the QEMU master branch. Awesome!


Yep, lots of commits on GitHub on the 19th of December relating to PCIe link speeds. :slight_smile:

For those of us on OSes where we aren't able to compile and install from master (like unRaid): am I correct in assuming we'll see these features once we get an OS update that includes QEMU 4.0?


Unraid gets an update when it gets an update.

Typically, yes, now that it’s in master, the next step would be waiting for a stable release that your distro maintainer will build and ship.


4.0 is when it will default to using the higher link speeds. Last I read, however, the 3.2 and later builds have these patches, but you must specify the link speed yourself. I have not checked, as I have been on break, so I could have the versioning wrong :slight_smile:
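If I understand the patch set correctly, on builds where the higher speeds are not yet the default you can override the pcie-root-port properties yourself, for example via a qemu:commandline override in libvirt. A sketch (the property names `speed`/`width` are from the patch set and may differ between builds, e.g. earlier revisions used experimental `x-` prefixed names; the values 8 GT/s and x16 are just examples):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the domain definition ... -->
  <qemu:commandline>
    <!-- Raise the advertised link speed/width on all pcie-root-ports -->
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.speed=8'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.width=16'/>
  </qemu:commandline>
</domain>
```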


I applied the changes provided by @gnif by hand on the 3.1 release, and it got the job done. NVIDIA's system information reports an x8 PCIe 3.0 connection.

Then I tried the QEMU on the master branch on GitHub; NVIDIA system information reports x1 PCIe 1.1 operation or something like that, just like unpatched QEMU. However, GPU-Z can run a basic test and shows that under load the card runs at x8 PCIe 3.0. Games run fine, though. Might be a power-saving thing, idk.

I'm looking for a configuration example with virt-manager for the master branch (compiled it just yesterday) showing how to configure the speed and width of the ports. I tried the configuration mentioned by @nibbloid, but I think he applied some patches not yet available on GitHub; the VM wouldn't boot, and a popup in virt-manager says the options are not supported.


To install AMD drivers in Windows 10 I use 2 monitors:

  • monitor #2 connected normally via displayport to VM (it is also connected to host via HDMI)
  • monitor #1 connected via DVI-D to Linux host

On the Linux host I connect to the VM's Remote Desktop (i.e. over RDP) with Remmina (this makes Windows 10 log out on monitor #2). Leave monitor #2 on the logout screen.

I then install the AMD drivers over the RDP session. The RDP session normally drops at around 55-60% of the driver installation progress, and I just reconnect with RDP again to finish the installation.

This method works for installing the current 18.12.x / 19.1.1 drivers.


I have done some testing on this recently and came across your post. Windows definitely seems to run better for me when the hypervisor flag is left on.

Here is a 3DMark bench I did with the hypervisor flag off:

And here is one with it on:

Almost a 500 pt difference. In general, when actually using the OS, I notice some slight sluggishness with the hypervisor flag off as well (it seems to negatively impact 2D performance). With it on, there's no perceptible difference from bare metal.
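For anyone wanting to reproduce the comparison: toggling what I'm calling the "hypervisor flag" is done in the libvirt CPU definition. A sketch, assuming this is the CPUID hypervisor bit:

```xml
<cpu mode='host-passthrough'>
  <!-- policy='disable' hides the hypervisor bit from the guest
       ("hypervisor off"); remove the line, or use policy='require',
       to leave it on -->
  <feature policy='disable' name='hypervisor'/>
</cpu>
```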