
How fast is KVM? Host vs virtual machine performance!



So after we went through the guide on PCI passthrough, one question came up quite a bit: what exactly is the performance difference between running games in a VM and running them natively on the host?

This question ties into a broader one: how well can KVM actually run a guest OS? What is the performance difference between the host and a virtual machine?

I ran a slew of games and benchmarks to accurately represent a wide variety of performance measurements. The system used for testing is my own PC:

  • Intel Core i7-4790K overclocked to 4.2 GHz
  • 32 GB of G.Skill DDR3 RAM running on an XMP profile for a total speed of 2133 MHz
  • EVGA GTX 980 Superclocked
  • Gigabyte Z97MX Gaming 5 motherboard
  • 256 GB Samsung 850 Pro M.2 SATA SSD connected directly to the motherboard’s onboard M.2 slot

I used a clean install of Windows 8.1 Pro with the pre-installed Metro apps removed and no internet connection; the virtual machine used the exact same setup.

The host for the virtual machine was the same PC running Debian Stretch:

  • Kernel 4.8
  • OVMF for UEFI support
  • Chipset i440FX
  • 10 GB of RAM
  • 4 cores with 2 threads each, for a total of 8 threads (to mirror the host CPU)
  • Disk format was qcow2 using writeback caching
  • Disk file itself was running off of the same M.2 SSD
  • GTX 980 passed through to the VM
  • Virtualized with KVM and all Intel virtualization extensions (VT-x and VT-d) enabled
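For readers who want a concrete starting point, the configuration above roughly corresponds to a QEMU invocation like the following. This is an illustrative sketch only — the image name, OVMF path, and PCI address (01:00.0) are assumptions, not my actual command line; virt-manager/libvirt builds an equivalent command from its XML config.

```shell
# Sketch of a QEMU launch matching the specs above (i440FX, OVMF, 10 GB RAM,
# 4 cores x 2 threads, qcow2 with writeback caching, GPU passed through).
# Paths and PCI addresses are placeholders — adjust for your system.
qemu-system-x86_64 \
  -enable-kvm \
  -machine pc-i440fx-2.8 \
  -cpu host \
  -smp sockets=1,cores=4,threads=2 \
  -m 10G \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive file=win81.qcow2,format=qcow2,cache=writeback \
  -device vfio-pci,host=01:00.0,multifunction=on \
  -device vfio-pci,host=01:00.1
```

The second `vfio-pci` entry is the GPU's HDMI audio function; both functions of the card generally have to move to the guest together.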

All benchmarks and games were run with identical settings.

Here is the Google Docs spreadsheet with all my data and charts.

Let’s start with CPU performance.

CPU performance is, amazingly, almost the same between the host and the VM. In Geekbench the host scored 4,987 on single-core and the VM scored 4,718 — not bad. In both the single- and multi-core tests the VM came in roughly 5% behind the host.

Cinebench yielded much the same result: the host scored 817 and the VM scored 775, a performance difference of 5.14%. When you think about it, that's insane. Running an OS under KVM with Intel's virtualization acceleration can achieve about 95% of the host's CPU performance. This right here is why corporations are running everything in virtual machines.

Next on to disk performance.

qcow2 is generally regarded as the best storage format, balancing performance with features like compression and snapshots. I ran CrystalDiskMark with a test size of 2 GB, testing both sequential reads/writes and random 4 KiB reads/writes. The results here are a bit skewed because of writeback caching on the host.
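As a quick aside, setting up a qcow2 disk like the one used here is straightforward. A minimal sketch, with the file name and size as placeholders rather than my actual image:

```shell
# Create a sparse 64 GB qcow2 image; it grows on demand and supports
# compression and snapshots. Name and size are placeholders.
qemu-img create -f qcow2 win81.qcow2 64G

# Inspect the image; "disk size" shows actual on-disk usage.
qemu-img info win81.qcow2
```

Writeback caching is then chosen per drive at launch time, e.g. `-drive file=win81.qcow2,format=qcow2,cache=writeback`, or in virt-manager under the disk's cache mode setting.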

The sequential read speed on the host was 561 MBps; the VM managed 531 MBps, making the VM about 5% slower than the host.
The opposite is true for write speeds, however: the host hit 407 MBps while the VM hit 508 MBps, making the VM about 25% faster on writes. Again, this is most likely an effect of writeback caching.

Now for the fun part, GPU performance.

These are not only real gaming results; they are also, indirectly, a measurement of the overhead of PCI passthrough itself. The difference in CPU performance shouldn't have an impact here, since none of these games are CPU-intensive. Any performance difference here comes from PCI passthrough via KVM — the host handing off a PCIe device to a virtual machine.
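For context, passthrough works by binding the GPU to the vfio-pci stub driver on the host so QEMU can hand it to the guest. A minimal Debian-style sketch — the vendor:device IDs below (10de:13c0 for a GTX 980, 10de:0fbb for its audio function) are assumptions, so verify yours with `lspci -nn`:

```shell
# Tell vfio-pci to claim the GPU and its audio function at boot, before the
# nvidia/nouveau driver can grab them. IDs are assumptions for a GTX 980.
echo "options vfio-pci ids=10de:13c0,10de:0fbb" > /etc/modprobe.d/vfio.conf
echo "vfio-pci" >> /etc/modules

# Rebuild the initramfs so the binding applies early in boot.
update-initramfs -u
```

With IOMMU enabled in the kernel command line (`intel_iommu=on` on this platform), the card then shows up as claimable by the VM.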

Starting with Unigine Heaven, which should be the most accurate of the GPU tests: the benchmark runs exactly the same every time and reports detailed data with each run.
In Unigine Heaven the host averaged 62.7 FPS and the VM averaged 61.1 FPS, a performance difference of 2.5%. Just like the CPU results, this is astonishing: KVM can run a guest OS at 95-98% of the host's native performance.

Bioshock Infinite only saw a 0.74% difference in performance,

CS:GO only saw a 1.63% difference,

and Rocket League showed only a 4% performance delta.

Some games actually ran better in the virtual machine; whether that was related to the VM's better disk write speeds or some other anomaly, I can't say.

Borderlands: The Pre-Sequel ran 8.7% faster in the virtual machine

and Metro Last Light ran 2.2% faster.

For kicks I also ran Geekbench’s GPU test and found the VM to only be 1.19% slower than the host, yet again an amazing result.

KVM, alongside Intel's virtualization acceleration technologies, has come a hell of a long way to achieve 95-98% of the host's performance.

So, TL;DR: don't worry about performance in a virtual machine. If you have virtualization acceleration enabled, the real-world performance difference with KVM/QEMU will be negligible.

Thanks for reading/watching! If you have any ideas for future videos let me know.

Play games in Windows on Linux! PCI passthrough quick guide
Ubuntu vs Windows - Gaming Performance!

Feel like talking my slow ass into a vm of windows ?
Boot to Linux for security,
Vm a win10 for games and junk
If windblows gets buggy i can load a state and fix it.

I7 6800k
Asus strix x99 gaming
rog strix-gtx1080-o8g-gaming

I need to grab a secondary little shitty gpu for the linux side, but could probly just use 2 diff inputs on my screen,
One for host one for vm


How times change: when I first posted the fact two years ago that some games are faster in a kvm container in linux than on bare metal windows, there was pretty much a riot. Now, as it's no longer necessary to have Windows, not even in a container, because most games come with a native linux version or a customized wrapper that outperforms KVM, people actually believe it lol.

Thanks for the test, it would also be maybe interesting to test the windows versions of games against the linux (with wrapper) version of games. I'm curious as to how much faster these are still.


Now I am sold on switching to linux


That's next week's video. I already have all the benchmarks and data recorded. ;)


Great video! I've been meaning to go VM for my gaming needs for a while now. How hard is it to do GPU passthrough with two discrete GPUs? Say an AMD card for the Linux host and an Nvidia for the VM.


Actually I think that's the easier route to go. You don't have to mess with VGA arbitration.


Appreciate your videos on this stuff mate. I'll be having to hit up your original video listed tomorrow as I start my experience moving away from gaming on Windows into Linux. Sadly none of the games I'm playing are Linux native atm.


Thanks @GrayBoltWolf for the benchmarks! I've been doing VGA passthrough for nearly 5 years now - first using Xen, now KVM. Your results reflect my experience and my benchmarks, though I did not run benchmarks on bare metal Windows installations.
Running Windows in a virtual machine has many advantages and no disadvantage I can think of (except for the additional GPU needed). Backing up Windows into a compressed gzip file and later restoring it when needed is a matter of less than 10 minutes on my machine for ~90 GiB partition. I use LVM, though.
I sincerely hope that this video and the results, together with your nice tutorial, help more people to run Linux as their main OS and Windows safely in a VM.


Very interesting... I have been thinking about doing this for a while. Does anyone know if Linux will run headless while handing the gpu to the VM? I don't want to stick another card in my box, but booting Linux headless and autostarting the VM would be wonderful. Could I then VNC to a virtual desktop without messing up the passthrough stuff?

Also, thanks for posting all of this. Quite compelling.


You could absolutely do this. Just control the host over SSH or X11 forwarding.
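If the VM is managed by libvirt, the headless workflow can be sketched like this (assuming a domain named "win81" — the name is a placeholder):

```shell
virsh autostart win81   # launch the VM whenever libvirtd starts
virsh dominfo win81     # check its state from an SSH session
virsh shutdown win81    # clean ACPI shutdown, no display needed
```

The host never needs a local display for any of this, and the passed-through GPU stays dedicated to the guest the whole time.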


Honestly the only reason i haven't started doing this is because i'm not sure how well a Vive is going to cooperate being inside a VM. The thing is already initially a bear to get working with regular windows, i'm not sure how well it would work when dealing with passed through GPUs and USB ports.


This is a great article, and great news!
What about the other way around? That is Linux running on a Windows host. Windows has better support for my hardware, once it has the proper drivers.
To fully exploit my hardware I need the drivers which are only available for Windows.
I guess I need another laptop. That’s too bad, I kind of liked it.
Or maybe PCI passthrough can help. Can I do PCI passthrough to any device, such as SD card reader?


That’s similar to what I’m doing. It’s quite easy assuming that you have hardware that supports it. The first thought that most people have is to get a weaker GPU for Linux and a stronger GPU for Windows. I’ve done the opposite, as I would rather play games in Linux, and I’ve found that there are very few games I actually have to use my VM for. I would however suggest trying the AMD card in the VM instead of the host; Nvidia’s driver stops working if it detects the VM. That said, you can still make it work without too much trouble, as is shown in these benchmarks.



I have successfully got my VM running, but I notice that games are extremely laggy. I am pretty sure it is a CPU problem: when I look at the VM in the host GUI (I use QEMU + virt-manager), I can see that the CPU is not even 20% utilized, even though I know the game is very heavy on CPU usage.

Could you give me some tips on how to optimize the CPU throughput?


Forgive me if this was already answered elsewhere, but: is this strictly desktop-class hardware only? It’s my dream to get something like this working on my 8750H 1070 MSHYBRID laptop. The biggest roadblock(s) I have right now is that there doesn’t seem to be any effective means of thermal management in Linux for my hardware. I need to be able to create custom fan curves and be able to have temp readout overlays since the 8750H gets so easily carried away. A decent ThrottleStop analog wouldn’t hurt either.

Is this just too tall of an order right now? Do I just need to give in and pursue a career in programming?