The title is really bad, I know, but I couldn't think of a better one.
At the moment I have an i5-4670 with an iGPU and an Nvidia GTX970. I use the Nvidia card for both the Linux host and a Windows VM, but not at the same time, of course. When a game has a Linux client I play it on Linux; otherwise I use the VM.
ATM I have 2 HDMI cables coming off my PC:
from the iGPU
from Nvidia GPU
(I also have a DVI cable coming off the iGPU to a 2nd monitor, which always displays the host OS - Linux)
These go into a KVM switch and then into the monitor (2 inputs from the PC go into the switch and 1 output goes to the monitor).
When I want to play on Linux I just enable the Nvidia driver, disable the VFIO driver, and leave the GTX970 basically disconnected (the KVM switch is set to the iGPU). When I want to play on Windows I disable the Nvidia driver, enable the VFIO driver, and switch the KVM to the GTX970.
So with my setup when the GTX970 is attached to Linux the GPU is somehow only used for rendering and the iGPU is used for display. When I attach the GTX970 to the VM the iGPU is used for both rendering and display in Linux and the GTX970 is only used for the VM. Don’t ask me how that works because I don’t know.
I am switching to a Ryzen 7 5800X with a GT210 + GTX970. My question is: can I replicate such a setup with the Ryzen 7 5800X and GT210 + GTX970, or do I have to constantly switch ports and the primary GPU in the BIOS? (The new motherboard is a Gigabyte X570 Aorus Elite.)
Also, can someone explain how my current setup actually works, and why I can use the GTX970 only for computation while using the onboard GPU's output in Linux?
Switching requires a reboot. I use 2 scripts that change some config files, rebuild the initramfs, and reboot. The primary display is always set to the iGPU in the BIOS. For example, the script that switches back to the Nvidia driver:
removes the blacklisting of nvidia, nvidia_modeset, nvidia_uvm, nvidia_drm from /etc/modprobe.d/blacklist.conf
removes /etc/modprobe.d/vfio.conf and /etc/modules-load.d/vfio.conf
creates /etc/X11/xorg.conf because Nvidia has trouble without one and Intel iGPU has trouble with one
rebuilds the initramfs with sudo mkinitcpio -p linux
reboots
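The steps above could be sketched roughly like this (assuming Arch-style paths and mkinitcpio as in the post; /root/xorg.conf.nvidia is a hypothetical stash location for the Nvidia xorg.conf, and DRY_RUN=1 just prints the commands so you can sanity-check them before running for real as root):

```shell
#!/bin/sh
# Sketch of the "switch back to Nvidia" script described above.
# With DRY_RUN=1 (the default) commands are only printed, not executed.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# 1. Un-blacklist nvidia, nvidia_modeset, nvidia_uvm, nvidia_drm
run sed -i '/^blacklist nvidia/d' /etc/modprobe.d/blacklist.conf

# 2. Drop the VFIO configuration
run rm -f /etc/modprobe.d/vfio.conf /etc/modules-load.d/vfio.conf

# 3. Restore the Nvidia xorg.conf (hypothetical stash path)
run cp /root/xorg.conf.nvidia /etc/X11/xorg.conf

# 4. Rebuild the initramfs and reboot
run mkinitcpio -p linux
run reboot
```

The VFIO-side script would do the mirror image: re-add the blacklist lines, recreate the vfio.conf files, remove /etc/X11/xorg.conf, then rebuild and reboot.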
I use Linux + Libvirt/KVM for virtualization.
I always have monitor output from the iGPU in Linux (the host OS). When the Nvidia GPU is attached to the host (i.e. the Nvidia driver is loaded), stuff gets rendered by it and the output is somehow routed through the iGPU; nothing is plugged into the Nvidia GPU. When the Nvidia GPU is detached from the host (i.e. the Nvidia driver is blacklisted and the VFIO driver is loaded), the iGPU does rendering and displaying for the host, and the Nvidia card does rendering and displaying for the VM, displaying only through its own HDMI output.
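A quick way to see which side currently owns the card is to check which kernel driver each display-class PCI device is bound to in sysfs. This is just a sketch I'd use for checking; the sysfs layout is standard, but the device addresses it prints depend on your machine:

```shell
# List display-class PCI devices and the kernel driver bound to each.
# "nvidia" means the host owns the GPU; "vfio-pci" means it is parked
# for the VM. Takes the sysfs PCI directory as an argument.
list_gpu_drivers() {
  for dev in "$1"/*; do
    [ -f "$dev/class" ] || continue
    case "$(cat "$dev/class")" in
      0x0300*)  # VGA-compatible controller
        if [ -L "$dev/driver" ]; then
          drv=$(basename "$(readlink -f "$dev/driver")")
        else
          drv="no driver"
        fi
        echo "$(basename "$dev"): $drv"
        ;;
    esac
  done
}

list_gpu_drivers /sys/bus/pci/devices
```

With my current setup this would show the Intel iGPU on i915 all the time, and the GTX970 flipping between nvidia and vfio-pci depending on which script ran last.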
Now, I have not tried this with 2 Nvidia GPUs on Ryzen (the CPU arrives on Monday), but from what I read online I suspect it won't work that way, i.e. I'd have to switch the primary GPU in the BIOS and the monitor output.
It sounds like you might have a setup similar to single-GPU passthrough, with the iGPU just a bystander. I've never tried single-GPU passthrough, but I think there are some guides around and it does seem popular.
After some digging I think I found out how and why my current setup works, and unfortunately it doesn't seem to be possible without an Intel iGPU.
When I installed the Nvidia driver I ran nvidia-xconfig, which generated a big xorg.conf file that I think enabled PRIME or something (the xorg.conf file only works when I use the Nvidia driver; when I use the VFIO driver I remove it, because it messes with the Intel iGPU). I found 2 possibilities that look like how my PC is set up: "Offloading Graphics Display with RandR 1.4" and PRIME. So if that is what is happening, I don't think I'll be able to do it with 2 Nvidia GPUs.
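For reference, the "Offloading Graphics Display with RandR 1.4" scheme from the Nvidia driver README boils down to telling X to use the iGPU's provider as the output sink for the Nvidia render source, which matches what I'm seeing (Nvidia renders, iGPU drives the monitor). A sketch of the command it uses, with the common default provider names (yours may differ; check with xrandr --listproviders):

```shell
# Hypothetical provider names; verify with: xrandr --listproviders
SINK="modesetting"   # iGPU provider that drives the physical outputs
SOURCE="NVIDIA-0"    # Nvidia provider that does the rendering
CMD="xrandr --setprovideroutputsource $SINK $SOURCE"
echo "$CMD"   # run this inside the X session, then: xrandr --auto
```

Since this scheme specifically pairs the Nvidia driver with a kernel-modesetting iGPU provider, it's the part that likely won't carry over to a GT210 + GTX970 combination.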
Here is the Xorg config I use when running the Nvidia driver in Linux: