Am I screwed if both my GPUs are in the same IOMMU group? (Aorus X570 Master)

I spent the rest of the day tinkering with it, and luckily it turned out really well!

The biggest issue I had was that games were stuttering quite a bit, even though they had a high average FPS. This turned out to be because of the frequency governor on my CPU. By default it is set to “schedutil”, which apparently doesn’t respond at all to CPU activity inside the VM. I tested this by watching the CPU frequency while running prime95 inside the VM, and it wasn’t affected whatsoever. However, when I ran stress -c 4 on the host, the frequency shot up to max.
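
If you want to check this on your own system, the governor and current clock speeds are exposed through sysfs and /proc (this assumes the usual cpufreq layout; paths can vary by driver):

```shell
# Show the active governor on every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Watch the per-core clock speeds update once a second
watch -n1 "grep 'MHz' /proc/cpuinfo"
```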

Setting the “performance” governor on all my cores like this:

sudo cpupower frequency-set -g performance

Completely resolved the issue, and games now perform indistinguishably from the PC I took the GPU from :grin: Unfortunately that GPU was a GTX 1060 so it doesn’t really deliver the frame rates I want. For that reason I’ll probably keep dual booting to play games using the host Radeon RX 5700 XT card until the RTX 3080 Ti is released. (This whole endeavor has really been to prepare my PC for that.)

Another issue I had was forwarding my mouse and keyboard to the VM. I tried using the Evdev method (as described here), which worked pretty much flawlessly for desktop use, and it’s really handy to be able to pass the mouse and keyboard back and forth using a keyboard shortcut. However, as soon as I jumped ingame in Apex I noticed that any fast flicks of the mouse would be completely borked: my crosshair would first go in the direction I was pulling the mouse, then it would jump partway back towards where it started. This is the kind of weird behavior I would expect if I was forwarding the mouse using something like Synergy.
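
For anyone curious, the Evdev method essentially amounts to handing the raw input devices to QEMU as input-linux objects via the qemu:commandline escape hatch (this requires the xmlns:qemu namespace on the domain element; the device paths below are placeholders you’d look up under /dev/input/by-id/):

```xml
<qemu:commandline>
  <qemu:arg value="-object"/>
  <!-- grab_all + repeat on the keyboard so the toggle shortcut moves both devices -->
  <qemu:arg value="input-linux,id=kbd1,evdev=/dev/input/by-id/YOUR-KEYBOARD,grab_all=on,repeat=on"/>
  <qemu:arg value="-object"/>
  <qemu:arg value="input-linux,id=mouse1,evdev=/dev/input/by-id/YOUR-MOUSE"/>
</qemu:commandline>
```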

I also had an issue with forwarding audio from the VM to PulseAudio (as described here). It worked flawlessly, the audio quality was perfect as far as I could tell and there was no stuttering or crackling, but unfortunately the delay was simply too high for games. I would estimate around 200 ms of delay compared to using the monitor’s speakers directly over HDMI. It’s possible this could be resolved with some configuration; I’ll probably look into that in the future.

For now, I’ve worked around both of the above issues by simply forwarding the USB devices directly to the VM (as described here). It turns out this is really simple and quite convenient. I run a script like this right after booting the VM:

#!/bin/bash

virsh attach-device wintendo /home/tomas/.VFIOinput/steelseries_siberia_1.xml
virsh attach-device wintendo /home/tomas/.VFIOinput/steelseries_siberia_2.xml
virsh attach-device wintendo /home/tomas/.VFIOinput/corsair_keyboard.xml
virsh attach-device wintendo /home/tomas/.VFIOinput/logitech_g900_mouse.xml
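
Each of those .xml files is just a small USB hostdev definition. A sketch of what one might look like (the vendor/product IDs here are placeholders; find the real ones for your devices with lsusb):

```xml
<hostdev mode="subsystem" type="usb" managed="yes">
  <source>
    <!-- Placeholder IDs; substitute the values lsusb reports for your device -->
    <vendor id="0x1234"/>
    <product id="0xabcd"/>
  </source>
</hostdev>
```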

I have separate keyboards for gaming and working (writing code) so I can safely forward the gaming keyboard and the mouse to the VM without losing access to the host.

When I power off the VM, all the devices are automatically returned to the host.

I also tested out Looking Glass, which I was really excited about. I tried both the stable version and the bleeding edge version, but unfortunately the performance was nowhere near good enough for competitive games. The game’s frame rate was around 100 FPS, and the frame rate of the Looking Glass client was several hundred FPS, but the “UPS” counter in Looking Glass never managed to reach even 60 while I was ingame in Apex. (UPS is the number of frames captured and transferred to the host per second.) The experience otherwise was extremely awesome and polished though. If I was playing an RTS game where frame rate wasn’t important I would definitely be using it rather than switching monitor outputs.

I plan on writing an “initiate game mode” script next weekend, which will:

  • Launch the wintendo VM
  • Forward the appropriate peripherals
  • Set the performance governor on the CPU cores I’ve pinned the VM to
  • Detach the gaming monitor from the host using xrandr so it can be dedicated to the VM (I have 2 monitors)
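
As a rough sketch, it will probably look something like this (untested; the core list and the xrandr output name are placeholders specific to my setup):

```shell
#!/bin/bash
# Hypothetical "initiate game mode" script -- a sketch, not the final thing.

# 1. Launch the VM
virsh start wintendo

# 2. Forward the peripherals
for dev in /home/tomas/.VFIOinput/*.xml; do
    virsh attach-device wintendo "$dev"
done

# 3. Performance governor on the pinned cores ("2-7" is a placeholder)
sudo cpupower -c 2-7 frequency-set -g performance

# 4. Release the gaming monitor from the host
#    ("DisplayPort-1" is a placeholder; check `xrandr` for the real name)
xrandr --output DisplayPort-1 --off
```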

Then I’ll write a corresponding script that undoes all of the above and shuts down the VM. Hopefully this will all end up being more convenient than dual booting :grin:

I also want to find a performant way of binding a host directory into the guest, so I can have OBS in the VM record video directly to host storage. Hopefully libvirt already includes a way to do this that doesn’t involve installing software on both machines and syncing over the network. I haven’t even started looking into this though, so I would appreciate any pointers.

The only other noteworthy thing I discovered that I can think of right now is that you can use logical volumes (LVM) as storage for VMs. This is what I’ve done for the Wintendo VM:

➜ sudo lvs
  LV                           VG   Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home                         main -wi-ao---- 1000.00g                                                    
  system                       main owi-aos---  100.00g                                                    
  system-snapshot-before-iommu main swi-a-s---  100.00g      system 1.38                                   
  wintendo                     main -wi-a-----  256.00g
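
Creating a volume like that for a VM is a one-liner (this assumes a volume group named main, like mine):

```shell
# Carve a 256 GiB logical volume for the VM out of the "main" volume group
sudo lvcreate -L 256G -n wintendo main
```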

Libvirt config:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/main/wintendo"/>
  <target dev="sda" bus="sata"/>
  <address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>

It’s currently attached to the VM as a SATA disk, which I’ve read is suboptimal for performance, but it didn’t have a noticeable impact on my game loading times. I haven’t done any disk benchmarks inside the VM yet though.

The alternative is using a Virtio disk, which should be significantly faster, but according to this blog post it tends to break when running Windows update, so I’m in no hurry to switch away from the SATA approach.
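
For reference, switching would mostly be a matter of changing the target bus in the disk element above (plus installing the virtio storage drivers inside Windows first, or the guest won’t see the disk); something like:

```xml
<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/main/wintendo"/>
  <!-- virtio instead of sata; requires virtio storage drivers in the guest -->
  <target dev="vda" bus="virtio"/>
</disk>
```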

I am currently using Virtio for the VM’s NIC though, which appears to be working perfectly.

I haven’t really used the VM for more than a couple of hours of gaming so far, so the opinions I’ve expressed in this post could definitely change :slight_smile:

Edit: Played Apex in the VM for about 4 hours straight tonight, absolutely no issues! Turned my resolution down to 1280x1024 so this puny GPU can handle more than 100 fps :slightly_smiling_face: So far it’s indistinguishable from playing on bare metal.