I'm giving VGA passthrough another shot.
Host: Arch, no special kernel.
Host GPU: EVGA GTX 960
Guest GPU: Sapphire 7870
Mainboard: MSI X99A SLI Plus
CPU: i7-5820K
I added "intel_iommu=on" and "pci-stub.ids=xxxx:xxxx,xxxx:xxxx" to the kernel command line.
The stubbing works: the kernel driver in use for the GPU is now vfio-pci.
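For anyone checking the same thing: you can verify which driver the kernel bound to the card like this (the PCI address 01:00.0 is a placeholder; find your card's address first):

```shell
# Find the guest GPU's PCI address.
lspci | grep -i vga

# Show vendor/device IDs and the bound driver for that address
# (replace 01:00.0 with the address from the previous command).
lspci -nnk -s 01:00.0
# If the stubbing worked, the "Kernel driver in use:" line should say
# vfio-pci (or pci-stub) instead of radeon.
```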
virt-manager also doesn't complain about anything when creating the VM.
During and after installation there is no output from the passed-through GPU.
In Windows, Device Manager shows an unknown graphics device and some other devices that need the VirtIO drivers. After installing those (VirtIO serial, balloon, and the like), the 7870 is recognized, and then the VM immediately crashes.
It says: "There was a problem. Need to reboot. Gathering info, and rebooting."
For the online error search: "SYSTEM_SERVICE_EXCEPTION (atikmdag.sys)".
I've been searching for a solution to this "atikmdag.sys" crash that keeps taking down the VM, but I cannot find anything.
Anyone, any idea?
Have you removed the virtual graphics hardware from the VM, like the SPICE server and such? I don't think you should even get a display in virt-manager if you are passing through the GPU. It might also be worth trying the vfio kernel.
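For reference, removing the virtual graphics usually means deleting the emulated display devices from the libvirt domain XML; a rough sketch of what to remove (vmname is a placeholder):

```shell
# Open the domain XML for editing (vmname is a placeholder).
virsh edit vmname
# In the editor, delete the emulated display devices, i.e. the
#   <graphics type='spice' ...> ... </graphics>
# element and the
#   <video> ... <model type='qxl' .../> ... </video>
# element, so the guest boots with no virtual VGA adapter.
```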
Unfortunately, removing the virtual graphics devices only results in no output at all.
The vfio kernel didn't help either.
Fortunately, I solved it.
I stopped using virt-manager.
virt-manager makes many assumptions and sets many options that may be unnecessary, and it seems one of them started this problem.
So I created the virtual machine with virt-manager and shut it down immediately.
I then looked into /var/log/libvirt/qemu/vmname.log
and took the QEMU start command virt-manager produced.
I compared it to commands other people had posted online, removed options that weren't used anywhere else,
and stripped the start script down to a minimum.
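I won't reproduce my exact script, but a stripped-down start command looks roughly like this (memory size, the PCI addresses 01:00.0/01:00.1, and the disk path are placeholders, not the values from my log):

```shell
#!/bin/sh
# Minimal QEMU VGA-passthrough launch -- a sketch, not the exact script.
qemu-system-x86_64 \
    -enable-kvm \
    -m 8192 \
    -cpu host \
    -smp 6,sockets=1,cores=6,threads=1 \
    -device vfio-pci,host=01:00.0,x-vga=on \
    -device vfio-pci,host=01:00.1 \
    -drive file=/path/to/windows.img,format=raw,if=virtio \
    -vga none \
    -nographic
```

-vga none and -nographic make sure no emulated display is created, so the passed-through card is the only GPU the guest sees; 01:00.1 would be the card's HDMI audio function.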
Afterwards I could install Windows in the VM without a physical monitor attached, and then the graphics card driver.
During the driver installation the screen attached to the GPU turned on, and everything ran fine, where before the VM had blue-screened instead.
So maybe this helps someone who encounters the same problem.