Since I'm having some major issues with the stability of Unraid, I'm going back to an Arch host. It seems to be working alright, but I'd really like to get my GTX 660 working on the host. For now, I'm going to work on getting the R9 380 working in the guest. It seems like QEMU (or OVMF?) isn't initializing the device properly: I get green and pink pixels as soon as it starts, and the background turns a really disgusting green once Windows starts booting. Everything is stuck at 800x600 (on a 1080p TV).
I'm thinking I may have a bad build of OVMF, or my configuration is wrong.
As a note: I got my OVMF code from here. It appears to be a nightly build, so that could definitely be the cause. I'm going to look for a more stable variant.
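(If it does turn out to be the nightly firmware, Arch also packages OVMF; the package name and path below are from memory and may differ by release, so treat this as a sketch rather than gospel:)
# install the packaged edk2/OVMF firmware instead of a nightly build
pacman -S edk2-ovmf
# the OVMF_CODE / OVMF_VARS images should then live somewhere under /usr/share/, e.g.:
ls /usr/share/edk2-ovmf/x64/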
Here's what I've put together for my configuration:
qemu-system-x86_64 \
-serial none \
-parallel none \
-nodefconfig \
-enable-kvm \
-name Windows \
-cpu host,check \
-smp sockets=1,cores=2,threads=2 \
-m 8192 \
-device ich9-usb-uhci3,id=uhci \
-device usb-ehci,id=ehci \
-device nec-usb-xhci,id=xhci \
-rtc base=localtime \
-vga none \
-net bridge,br=bridge0 \
-net nic \
-device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1 \
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=02:00.1,bus=root.1,addr=00.1 \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/home/sgt/my_vars.fd \
-drive file=/home/sgt/windows10-test.raw,format=raw \
-cdrom /home/sgt/Downloads/virtio-win-0.1.112.iso \
-boot order=d
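(As a sanity check before launching, something along these lines should confirm that both functions of the card are bound to vfio-pci; these are just standard lspci queries, and the 02:00.0/02:00.1 addresses are the same ones passed to -device vfio-pci above:)
# confirm the GPU and its HDMI audio function are claimed by vfio-pci
lspci -nnk -s 02:00.0
lspci -nnk -s 02:00.1
# both should report "Kernel driver in use: vfio-pci" before the VM starts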
I'm still puzzling over this, but the minute I get some progress, I'll update.
One piece I've found immensely helpful in getting a stable setup is this:
-device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1
This is a PCIe root port device, and it lets you attach the PCI devices you're passing through to a proper PCIe slot in the guest machine. Otherwise, the GPU just ends up hanging directly off the emulated legacy PCI bus, which Windows doesn't like, and which Nvidia REALLY doesn't like. This, along with -hyperv off, can get you a long way toward getting the Nvidia drivers working under a Windows VM.
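(I haven't pinned down the exact CPU flags yet, but the usual trick for keeping the Nvidia driver happy is hiding the hypervisor on the -cpu line; a rough sketch, not necessarily what ends up in my final command:)
# hide the KVM signature so the Nvidia driver doesn't refuse to initialize the card
-cpu host,check,kvm=off
# (on newer QEMU builds, hv_vendor_id= can additionally mask the Hyper-V vendor string
#  if you enable any hv_* enlightenments later)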
EDIT: now I've found that when the Windows VM doesn't have graphical issues, the Linux host does. The odd part is that the host is using the Intel iGPU, so it's not a power issue (850W PSU anyway, so it should be able to handle two mid-range GPUs and three hard drives). I'm starting to think it's an issue with how memory addresses are handled by the IOMMU, because it looks like the iGPU's memory gets corrupted, or there's some sort of overflow, causing the video output to get messed up.
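(To poke at that theory, a couple of generic checks that should apply here: watch the kernel log for IOMMU/DMAR faults while the guest is running, and make sure the iGPU isn't sharing an IOMMU group with the card being passed through. Roughly:)
# watch for IOMMU / DMAR faults or VFIO errors while the guest is running
dmesg -w | grep -i -E 'dmar|iommu|vfio'
# list IOMMU group membership; the Intel iGPU (usually 00:02.0) should not share
# a group with the 02:00.0/02:00.1 devices being passed through
find /sys/kernel/iommu_groups/ -type l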