Hello everyone.
I'm new to this forum but have been browsing here for a long time now. I have managed to successfully pass through two GPUs to two different guests at the same time.
Specs:
Host OS - Pop!_OS (systemd)
Guest OS - Windows 10
Mobo - ASUS Crosshair VIII Hero Wifi
CPU - AMD Ryzen 9 3950X
GPU 1 - RTX 2070 (boot GPU) for guest 1
GPU 2 - GTX 560 for guest 2
GPU 3 - some AMD Radeon HD 5000/6000 series card (don't know exactly which) in the chipset slot, for the host
Using the Pop!_OS how-to tutorial I got it to work (granted, I had to improvise, since the initramfs script wouldn't run by itself under systemd, on either Pop or Ubuntu, for some reason). The GTX 560 took some vBIOS ROM patching and the RTX 2070 some ROM editing. The 560 boots beautifully into its VM with the OVMF splash screen (haven't properly tested reset), but the 2070 just refuses to show that splash / boot screen at all until the Windows login screen shows up.

I have disabled the EFI framebuffer in the systemd-boot config, and almost all of the devices get the vfio-pci module assigned to them (except the USB controller). After the system boots, an X11 script I set up makes GNOME use the AMD Radeon card, which leaves the 2070's monitor on a blank black screen until Windows reaches the login screen. Shutting down the VM turns the screen off, but powering it back on doesn't turn the screen on until that same login screen appears.
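For anyone curious how patched ROMs get fed to the guests: libvirt lets you point a passed-through device at a ROM file in the domain XML. A minimal sketch (the PCI address and file path below are placeholders for illustration):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- placeholder host PCI address of the GPU; check lspci for yours -->
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <!-- hand the guest the patched vBIOS instead of the card's own ROM -->
  <rom file='/var/lib/libvirt/vbios/gtx560-patched.rom'/>
</hostdev>
```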
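The framebuffer disable and the vfio-pci binding boil down to kernel options plus a modprobe entry. A rough sketch, assuming placeholder PCI IDs (grab your own from lspci -nn) and Pop!_OS's kernelstub in place of GRUB:

```sh
# Disable the EFI framebuffer and hand the GPUs to vfio-pci at boot
# (IDs below are placeholders; list yours with: lspci -nn | grep -i nvidia)
sudo kernelstub --add-options "amd_iommu=on iommu=pt vfio-pci.ids=10de:1f02,10de:10f9 video=efifb:off"

# Equivalent modprobe binding; rebuild the initramfs afterwards either way
echo "options vfio-pci ids=10de:1f02,10de:10f9" | sudo tee /etc/modprobe.d/vfio.conf
sudo update-initramfs -u
```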
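And the "X11 script" part mostly amounts to pinning X to the AMD card by bus ID, something like this sketch (the BusID and file name are placeholders; the HD 5000/6000 cards use the radeon driver):

```
# /etc/X11/xorg.conf.d/10-host-gpu.conf
Section "Device"
    Identifier "HostRadeon"
    Driver     "radeon"
    BusID      "PCI:8:0:0"   # placeholder; use your card's lspci address (decimal)
EndSection
```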
While there seem to be absolutely no side effects to this, I was wondering if there is any way to fix it so that the 2070 shows the splash screen / framebuffer, in case it decides to fail at boot (like it did once, when virt-manager decided to switch to the secure-boot OVMF for some reason)?
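For reference, the firmware a VM boots with is pinned in the domain XML (virsh edit <vm-name>), so you can check whether virt-manager quietly swapped in the secure-boot build. A sketch of the non-secure-boot stanza, with paths as they typically are on Ubuntu-based distros:

```xml
<os>
  <!-- OVMF_CODE.secboot.fd here would mean the secure-boot build got selected -->
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
</os>
```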
Thanks in advance
Edit / Update: the solution is in my post below. Basic gist: it was my keyboard, which also had a built-in USB hub.
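If anyone hits something similar: lsusb -t prints the USB topology as a tree, so a keyboard with a built-in hub stands out as a Hub-class device with children hanging off it.

```sh
lsusb -t   # tree view; an internal hub shows up as Class=Hub with child devices under it
```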