Single GPU Pass-through Trials

Hello.

This is just me being persistent about not being able to pass through a GPU after Linux has already loaded the card. My current distro is KDE Neon 5.8 LTS with kernel 4.8. I am using the nouveau driver because I read that the proprietary NVIDIA driver is a PITA about letting the card go. Input from @wendell, or from anyone else who knows what I could try next, would be great.

A previous forum post I made was this. It was pointed out there that kernels 4.10 and up add a virtual GPU feature. I did some quick research into it, and it seemed to push in the direction of using the integrated GPU. There is more to it, but my intuition told me it wasn't the right path. I am probably wrong, but the hassle of setting up a new kernel on my distro would take away the LTS'ness of it.

So far I have reached a point where I cannot go any further, as I have no idea what to try next. I must point out that the normal method of pass-through has been tried and works: I followed the steps pointed out in this thread, and the VM could be restarted as many times as necessary and still work. I infer from this that my GPU is reset friendly.
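
For reference, by "the normal method" I mean the usual boot-time vfio-pci claim that those guides use, where the card never touches nouveau at all. A minimal sketch of what it boiled down to on Neon (the PCI IDs are placeholders for my card's GPU and HDMI audio functions; get yours from lspci -nn):

```bash
# /etc/default/grub -- turn the IOMMU on (amd_iommu=on on AMD boards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on"

# /etc/modprobe.d/vfio.conf -- let vfio-pci claim both functions before nouveau can
options vfio-pci ids=10de:13c2,10de:0fbb    # placeholder IDs
softdep nouveau pre: vfio-pci

# apply and reboot
sudo update-grub && sudo update-initramfs -u
```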

These are the travels made in the search of one GPU for all and all for one GPU:

  • Initially I tried to just run the VM with the PCI device added in virt-manager. This would just blank the monitor output and render the PC useless; I would have to hard reset to get the machine functional again. SSH still worked, but commands executed over it did nothing. The command was virsh start win10-gaming.

  • After this, I came across an article talking about unbinding the GPU before running the VM. This got me further, though still not to a successful pass-through. I was able to run the VM, still SSH into the PC, and undo the process, reverting to a running system with a desktop environment. This script was used to unbind the GPU, and this one to rebind it (roughly what the sketch after this list does). The unbind was followed by starting the VM and waiting until HDD activity ceased, which meant Windows had loaded, then running virsh shutdown win10-gaming and, after shutdown, rebinding the card. I couldn't tell why it wasn't working until I looked at /var/log/syslog, which was complaining: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff.

  • This led my hunt down another rabbit hole. I came across another article (Primary GPU workaround) explaining that the primary GPU's BIOS gets modified by the system as it is loaded at boot, so the copy the VM later reads is no longer clean. I felt like I had hit the jackpot and that this was the last step. /var/log/syslog showed no more issues about the ROM, but it still just didn't work.
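
In case it helps anyone reproduce this, the unbind/rebind pair amounted to roughly the following. This is a sketch from memory rather than the linked scripts verbatim; 0000:01:00.0 and 0000:01:00.1 are placeholder addresses (check lspci -D), and sddm is Neon's display manager:

```bash
#!/bin/bash
# unbind: take the running card away from nouveau and park it on vfio-pci
GPU=0000:01:00.0      # placeholder: the GPU function
AUDIO=0000:01:00.1    # placeholder: its HDMI audio function

systemctl stop sddm                                        # nothing may hold the card
echo "$GPU"   > /sys/bus/pci/devices/$GPU/driver/unbind    # release from nouveau
echo "$AUDIO" > /sys/bus/pci/devices/$AUDIO/driver/unbind  # release from snd_hda_intel
modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/$GPU/driver_override  # force next driver match
echo vfio-pci > /sys/bus/pci/devices/$AUDIO/driver_override
echo "$GPU"   > /sys/bus/pci/drivers_probe                 # trigger the re-probe
echo "$AUDIO" > /sys/bus/pci/drivers_probe
```

The rebind script is the mirror image: unbind both functions from vfio-pci, blank out driver_override, write the addresses to drivers_probe again so nouveau reclaims them, then systemctl start sddm.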

This is as far as I have come, and I was hoping for some input on something else I could try. Maybe checking at VM boot which devices Windows has found, though I don't know how to do this. I gave up at this point as it was late at night, as the logs point out.
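
One thing that might answer the "what does Windows see" question without logging in: QEMU's monitor can list the PCI devices it is presenting to the guest, straight from the host. Assuming the same domain name as above:

```bash
# is the domain actually running while the screen is blank?
virsh domstate win10-gaming

# list the PCI devices QEMU is handing to the guest
virsh qemu-monitor-command win10-gaming --hmp 'info pci'
```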

I understood the concept of the card's BIOS being modified, and why the virgin BIOS was necessary, which I dumped from the card with GPU-Z on a Windows partition. Still to no avail. I can't find anything that tells me why it doesn't work; the logs neither cry nor whine.
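
For completeness, the way the dumped BIOS gets handed to the VM is the <rom> element on the GPU's hostdev entry in the domain XML (edited with virsh edit win10-gaming). A sketch with a placeholder path and PCI address:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- placeholder: the dGPU at 01:00.0 -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- hand QEMU the clean GPU-Z dump instead of the host's shadowed copy -->
  <rom file='/path/to/virgin-vbios.rom'/>
</hostdev>
```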

I read something that mentioned loading the dGPU only after the system had booted, using the integrated GPU as primary, with the objective of keeping the dGPU's BIOS from being mucked around with. I don't remember any more where I saw this. The point was to have the system treat the dGPU as a secondary device that could be traded around. I haven't tried this yet because of the hassle, but I suppose a simple script could handle binding the dGPU and discarding the iGPU before loading into the DE, something like the sketch below.
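
If I do try it, I imagine the script looking something like this: firmware set to boot on the iGPU, the dGPU parked on vfio-pci from boot so its ROM is never shadowed, then adopted by nouveau for the desktop. A sketch, same placeholder address as before:

```bash
#!/bin/bash
# adopt the untouched dGPU after the iGPU has done the booting
GPU=0000:01:00.0    # placeholder address

echo "$GPU" > /sys/bus/pci/drivers/vfio-pci/unbind  # release the parked card
echo > /sys/bus/pci/devices/$GPU/driver_override    # clear the vfio-pci override
modprobe nouveau
echo "$GPU" > /sys/bus/pci/drivers_probe            # let nouveau claim it
systemctl restart sddm                              # bring the DE up on the dGPU
```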

In retrospect, the only reason I reject the iGPU as Linux's main GPU is its limited outputs compared to the surround 3-monitor setup I currently have. And possibly the missing oomph.