Full AMD System APU+dGPU setup looking into VFIO/Passthrough. Some Questions!

So I finally have a 5700G in this system and have enabled SVM and the IOMMU in the BIOS. The discrete GPU I want to pass through to a Win10 VM is a 6800 XT, but I want to keep both of my monitors (a 4K + 2K setup) connected to the 6800 XT and still use them for Linux while I'm using the VM.

From my understanding, I would somehow use Looking Glass in this setup. I don't really want to connect a monitor to the motherboard's HDMI port, but I guess I could connect the 2K display to the 5700G's iGPU if need be.

I still want to use the 6800 XT for games and other work on the Linux side when I'm not using the VM.

Is any of this possible, or do I just need to ditch Linux gaming and exclusively use the Win10 VM? I also hear it's not really viable with the amdgpu driver at the moment due to major stability issues and system crashes; if that's true, then it's not worth me setting up.

The whole point of my setup is to boot up a Win10 VM and play specific games on it with the 6800 XT, then pass the card back to Linux when the VM is shut down.

I've read a lot about VFIO and passthrough, but the situation seems to change weekly in terms of how well it works and what the best configuration is.

I don't want to spend several days setting this up only to come to the conclusion that it's an unstable, crashy mess and ultimately not worth it!

I've basically been getting things configured in the BIOS and kernel first, before even trying to use a VM (still not sure exactly how to configure it yet).

But now I have my iGPU connected to the 1080p secondary screen and it seems to work just fine, just like before. IOMMU and libvirt should be ready to go; I don't see any conflicts.
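
For anyone double-checking the same thing, the usual way to confirm that is to list the IOMMU groups and make sure the 6800 XT (and its HDMI audio function) sit in their own group, separate from anything the host still needs:

# list every IOMMU group and the devices in it (standard sysfs layout)
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done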

SR-IOV is disabled in the BIOS, as I'm told it has caused issues with AMD GPUs. Everything else seems ready.

libvirtd.service enabled, no errors, using default.xml
my GPU shows this with lspci -nnk -d 1002:73bf
Kernel driver in use: amdgpu, Kernel modules: amdgpu
I've added MODULES="vfio_pci vfio vfio_iommu_type1 vfio_virqfd" (to the initramfs config), but they don't show up in lsmod. I've read this is normal, though, and that once the VM is started and I run these commands again, they will show up and the kernel driver in use will also switch to the vfio driver…
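
For reference, my understanding is that the static way to have vfio-pci grab the card at boot is a modprobe option keyed on the device IDs, roughly like the sketch below. I haven't applied it, since I want the card back on the host between sessions, and the audio function ID below is a guess for Navi 21 (verify with lspci -nn):

# /etc/modprobe.d/vfio.conf -- only needed if you want vfio-pci to own the card from boot
options vfio-pci ids=1002:73bf,1002:ab28
# make sure vfio-pci wins the race against the host GPU driver
softdep amdgpu pre: vfio-pci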

Trying very hard to cross all my t’s and dot all my i’s here.

If you want to do GPU passthrough, you cannot use your dGPU in your host OS (Linux) anymore. You can only use the iGPU.

Given your situation, you have a few options. One is to use one monitor strictly for Windows and the other strictly for Linux: you play a game on Windows and browse the web on Linux, and that's it. You would likely also need two sets of peripherals, i.e. another keyboard and mouse, but you can use debauchee/barrier (kind of like Synergy; it was forked from Synergy Core) to share a single set of peripherals between two PCs (or in this case, your host OS and your Windows VM).
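
A minimal sketch of the barrier side, assuming the host runs the server and the VM runs the client (the names, config path, and address are placeholders; the guest would normally reach the host over the libvirt default network, e.g. 192.168.122.1):

# on the Linux host, which owns the physical keyboard and mouse
barriers -f --name linuxhost -c ~/.barrier.conf
# on the Windows guest you'd run the Barrier GUI instead
# (or the CLI client: barrierc -f --name win10vm 192.168.122.1)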

Another thing you can do is connect the motherboard's DP and HDMI outputs to the two monitors and use Looking Glass to view the Windows machine. In this configuration you can move the Windows screen to either monitor, just like an RDP client or TeamViewer if you have used either of those before, and use a single set of peripherals.
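
For what it's worth, the main host-side piece Looking Glass needs is a shared-memory device in the VM definition; a rough sketch of what gets added through virsh edit (the 64M size is a placeholder, it depends on the guest resolution, and the Looking Glass documentation has the exact formula):

# add inside <devices> via "virsh edit win10" (the VM name is a placeholder):
#   <shmem name='looking-glass'>
#     <model type='ivshmem-plain'/>
#     <size unit='M'>64</size>
#   </shmem>
# on the host you then run looking-glass-client, which reads /dev/shm/looking-glass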

But you cannot use the dGPU port on your host OS, because GPU passthrough means that your host OS (Linux) will give all the control to the guest VM (Windows), so it cannot access the hardware at all anymore.

I believe I've seen someone on the forum working on a project to ease the use of GPU passthrough to Windows VMs. It's called "libVF.IO" - disclaimer: I've never used it and have no idea how good it is, nor can I vouch for it, but if it eases the setup of GPU passthrough, I guess you should give it a try.

I thought I could pass it to the VM while using the VM (disabled on the host), then pass it back to amdgpu on Linux after the VM is shut down (re-enabled on the host).

Otherwise it would mean having no ability to use my Linux machine for any graphical workloads like games, etc.?

Anyway, I've got the QEMU/KVM side set up, with a Win10 image (60 GB) linked to a Win10 setup ISO. It works, but the passthrough part hasn't been configured yet; it's just a basic VM at the moment until I can confirm which settings I want to select and whether the GPU can be passed to and from the host (obviously not while the VM is active).
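
For when I do get to the passthrough part, my understanding is it boils down to a PCI hostdev entry in that same VM XML (virt-manager's "Add Hardware -> PCI Host Device" writes it for you), and with managed='yes' libvirt is supposed to detach the card from the host driver at VM start and hand it back at shutdown. Rough sketch only - the PCI address is made up, and I can't vouch for how cleanly the hand-back works while X is still holding the card:

# added via "virsh edit win10"; bus/slot are placeholders, take yours from lspci
#   <hostdev mode='subsystem' type='pci' managed='yes'>
#     <source>
#       <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
#     </source>
#   </hostdev>
# the GPU's HDMI audio function needs its own hostdev entry as well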

After reading all the threads of people doing exactly that, I'm pretty sure passing the GPU back to the host is one of the core things that has been figured out recently.

I just don’t know the exact parameters and configurations needed to make it happen on my setup.
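
From what I've pieced together so far, the manual version of the switch looks roughly like this, either through virsh or straight through sysfs; the PCI address is a placeholder, and the desktop has to let go of the card first, which seems to be where people's mileage varies:

# before starting the VM: detach the dGPU from the host (placeholder address)
virsh nodedev-detach pci_0000_0a_00_0
# after the VM shuts down: hand it back to amdgpu
virsh nodedev-reattach pci_0000_0a_00_0

# or the raw sysfs route (assumes the vfio-pci module is already loaded):
echo vfio-pci > /sys/bus/pci/devices/0000:0a:00.0/driver_override
echo 0000:0a:00.0 > /sys/bus/pci/drivers/amdgpu/unbind
echo 0000:0a:00.0 > /sys/bus/pci/drivers_probe
# ...and the reverse once the VM is off:
echo > /sys/bus/pci/devices/0000:0a:00.0/driver_override
echo 0000:0a:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:0a:00.0 > /sys/bus/pci/drivers_probe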

It may also be extremely unstable and not worth doing, as some people have commented now and again in other threads.

My motherboard only has HDMI, so this is not possible.

Here is one example of it being done: https://www.reddit.com/r/VFIO/comments/poznii/issues_switching_nvidia_gpu_back_and_forth_from/

JehTehsus in that thread can pass the dGPU back to the host just fine, but must restart the X server to do so. No reboot needed.
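
If restarting the display stack is the only missing step, on most systemd distros that should just be the generic alias, which points at whichever of gdm/sddm/lightdm is enabled:

sudo systemctl restart display-manager.service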

It's just a matter of digging through the internet until I find the answers I seek. Many people may be completely unaware it's even possible, so I don't expect everyone to just know about passing the GPU back to the host seamlessly. I do know Wendell has experience with this sort of thing, but again, new info appears each month that could be helpful.

In order to use the GPU on your host, you would have to remove the amdgpu blacklist for the 6800 XT, remove your GPU's ID from the vfio-pci configuration, and reboot your PC. Then you'd have to enable it all again and reboot in order to power on the Windows VM. If you want to use your GPU on Linux in such a scenario, you're kinda better off dual booting, but dual booting sucks, IM-not-so-humble-O. I am not entirely sure whether what you are asking for is even possible.
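
To be concrete about the reboot dance I mean: the static setup is usually a kernel parameter (or the modprobe equivalent) keyed on the device IDs, so flipping it means editing the boot config and rebooting each way. Sketch only, and the audio ID is a guess - check lspci -nn:

# e.g. appended to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then regenerate grub.cfg
vfio-pci.ids=1002:73bf,1002:ab28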

The issue with AMD GPUs hasn't been passing them through, but sending them an "off/reset" signal after the VM is powered off (the infamous AMD reset bug). Whenever you powered off the Windows VM, you couldn't power it back on until the host OS was rebooted to clear the GPU's state. This has been fixed recently; there used to be a script that worked around it before the fix, but now that it's fixed, it should just work.
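
If the card does still need help resetting, the thing people usually reach for is the gnif/vendor-reset kernel module; whether a Navi 21 card like the 6800 XT even needs it I honestly can't say, so treat this as a sketch with a placeholder address:

sudo modprobe vendor-reset
# on newer kernels you also point the device at the module's reset handler
echo device_specific | sudo tee /sys/bus/pci/devices/0000:0a:00.0/reset_method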

Unless you're talking not about GPU passthrough, but GPU splitting (also known as partitioning). If that's what you want to do, I'm not sure about it. I believe it has only been possible on FirePro and Quadro GPUs. Recently NVIDIA allowed people to split GPUs under Hyper-V and some people made it work on Linux, but I don't know how that works. I think you can partition the GPU between multiple VMs, but I really have no idea about using it both on your host and in a VM.

I think Wendell did a video on GPU partitioning relatively recently (a few months ago), I’ll have to check that.

I'm happy for my iGPU to drive Linux while the VM uses the dGPU. I just want the ability to 'reset' the GPU back to Linux after the VM is shut down, without much fuss, so I can continue doing Linux desktop stuff with a powerful dGPU instead of the slow iGPU.

I'm not expecting to use the dGPU on the VM AND the Linux machine at the exact same time! That would be crazy (but cool). Maybe one day.

The whole point of this is to avoid rebooting. If I have to reboot to re-enable the dGPU under Linux (with it using the amdgpu driver), then the purpose of GPU passthrough is moot, as I would be better off dual-booting into Windows at that stage!

If you change your mind and decide you do "want to spend several days setting this up to come to the conclusion that it's an unstable, crashy mess and ultimately not worth it," try looking into eGPU setups, GPU hotplug, and PCIe hotplug. It seems what you want is to dynamically add and remove your dGPU, in a manner similar enough to what users with Thunderbolt GPU enclosures do with laptops that you might find something you can repurpose for your needs.
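
The crude building blocks for that kind of hotplug already exist in sysfs, something like the following (placeholder address again, and no promises a desktop board brings the slot back up cleanly):

# soft-remove the dGPU from the PCI tree
echo 1 | sudo tee /sys/bus/pci/devices/0000:0a:00.0/remove
# ...later, ask the kernel to rediscover it
echo 1 | sudo tee /sys/bus/pci/rescan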

Well, I have most things set up now; gotta do some mega backups, so I can't mess with it at the moment. Finally moving my NTFS partitions to Btrfs… (NTFS is annoying me with all its little niche issues.)

Note: I only used NTFS for my Windows games run via Proton…

It may be an option to explore in the future if prices for such a thing ever come down (like in 100 years' time, lol).

Also, I don't understand what people mean when they talk about single-GPU passthrough… It sounds like they pass the GPU to the guest and it suspends their X desktop, which they can resume after exiting the guest?! Or is it assumed they must reboot at that stage? I'm quite confused about that method, how it works, and what its benefits are…
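
If I'm reading those guides right, the mechanism seems to be libvirt hook scripts that tear the desktop down before the VM starts and bring it back after it stops, with no reboot involved; something like this rough sketch, though I can't vouch for the details (the guides also unbind the virtual consoles/framebuffer first, and with managed='yes' libvirt apparently handles the vfio-pci binding itself):

#!/bin/bash
# /etc/libvirt/hooks/qemu -- called by libvirt as: qemu <guest> <operation> <sub-op> ...
GUEST="$1"; OP="$2"
if [ "$GUEST" = "win10" ]; then                   # placeholder VM name
  case "$OP" in
    prepare)
      systemctl stop display-manager.service      # desktop goes away and releases the dGPU
      modprobe -r amdgpu                          # only succeeds once nothing holds the card
      ;;
    release)
      modprobe amdgpu                             # hand the card back to the host driver
      systemctl start display-manager.service     # desktop comes back, no reboot needed
      ;;
  esac
fi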

If you must reboot to use a Windows VM, then it's no different from dual-booting with Win10 on its own drive, just without the issues…