Hello Level1Techs forum. I’m a long-time lurker and first-time poster, so apologies if this is in the wrong section.
I’ve had a VFIO passthrough setup for the past 2 or 3 years: Arch Linux as the hypervisor, with a GTX 970 driving the host. I pass 3 of the 4 cores of my Skylake i7 (with their corresponding hyperthreads) to my Windows 10 VM. The VM has direct access to raw disks via SATA controller passthrough, which also lets me dual boot. Lastly, I’m blacklisting the AMD RX 480 GPU, an Intel network adapter, and a dedicated USB PCI controller from the Linux kernel, so only the VM can touch those.
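For context, the blacklisting is done by binding those devices to vfio-pci at boot. A rough sketch of the modprobe config (the vendor:device IDs below are examples for an RX 480 and its HDMI audio function; check your own with `lspci -nn`):

```
# /etc/modprobe.d/vfio.conf
# Claim the passthrough devices for vfio-pci before the host drivers load.
# IDs are examples -- substitute the output of `lspci -nn` for your devices.
options vfio-pci ids=1002:67df,1002:aaf0
softdep amdgpu pre: vfio-pci
```

With that in place, `lspci -k` should show `Kernel driver in use: vfio-pci` for each device, and the host never initializes them.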
I have played many titles at >100 FPS with no problems at all. Rainbow Six Siege, Overwatch, Monster Hunter: World, et cetera, all play perfectly fine.
I have been unable to play Apex Legends in the VFIO VM without it constantly crashing (bad_module_info), micro-freezing (causing disconnects), or dumping to the desktop with no message. The Windows Event Log doesn’t show anything useful.
The VM is otherwise stable; it’s only this game that stops responding.
The odd part is that when I dual boot into Windows 10 directly (possible since I pass through the SATA controllers), the game runs perfectly fine. No freezing, no crashes, nothing.
The only architectural differences when dual booting are:
- Windows runs outside of qemu
- The i7 has 4c/8t (instead of 3c/6t)
- Windows technically has access to the GTX 970 (though it’s explicitly disabled in Device Manager)
- Direct access to 32 GB of memory (instead of 16 GB of hugepage-backed memory)
- Lower DPC latency
There are many people on the EA forums complaining about these crashes on their systems. What’s odd in my case is that I can make the problem go away simply by dual booting.
I’m leaning towards it being related to high DPC latency, and my next troubleshooting step will be to run the game with pinned VM cores + isolcpus.
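The plan looks roughly like this (a sketch, not tested yet — the cpuset numbers assume a 4c/8t Skylake where hyperthread siblings pair as 0/4, 1/5, 2/6, 3/7; verify with `lscpu -e` before copying):

```
<!-- libvirt domain XML: pin the 6 vCPUs to host cores 1-3 and their
     siblings 5-7, leaving core 0/4 for the host and emulator. -->
<vcpu placement='static'>6</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
  <emulatorpin cpuset='0,4'/>
</cputune>
```

Paired with a host kernel cmdline along the lines of `isolcpus=1-3,5-7 nohz_full=1-3,5-7 rcu_nocbs=1-3,5-7` so the scheduler keeps host tasks off the pinned cores. If the stutters are DPC-latency-driven, this should show up clearly in LatencyMon inside the guest.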
Has anyone else had a problem similar to this before?
Does anyone here play Apex Legends in their VFIO VM?