The setup:
Mainboard: GA-Z77X-D3H (3 PCIe slots; naming them top, middle, bottom)
CPU: i7 3770
1st GPU (top slot): Sapphire 7870
2nd GPU (bottom slot): let's say a FirePro V4900. Might change that to something else.
So I want to run Linux, pass the 7870 through to a Windows VM, and use the FirePro as the host GPU. I use QEMU/KVM, virt-manager and pci-stub on Fedora 22 (kernel 4.0.4). Everything worked fine with the internal Intel graphics.
Now here is the problem. I want to do it with a second GPU, the FirePro, instead of the Intel HD (because I want 3 screens). It has to sit in the bottom PCIe slot, otherwise both GPUs end up in the same IOMMU group. And it has to be the FirePro in the bottom slot, because the 7870 is a two-slot card and doesn't fit down there.
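In case it helps anyone checking the same thing: a quick sketch of how to list which devices share an IOMMU group, using only the standard sysfs layout (nothing specific to this board). If both GPUs print the same group number, they can't be split cleanly between host and guest.

```shell
# List every PCI device per IOMMU group. Two GPUs sharing a group number
# means they currently can't be separated for passthrough.
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    group=${d#/sys/kernel/iommu_groups/}   # strip the sysfs prefix
    group=${group%%/*}                     # keep only the group number
    echo "IOMMU group $group: $(basename "$d")"
done
# For human-readable names, feed each address to `lspci -nns <addr>`.
```

The loop prints nothing if intel_iommu isn't enabled yet, which is itself a useful hint that the kernel parameter didn't take.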
I did a fresh install of Fedora 22 with both cards in their slots. I boot up and can use both GPUs for output as I want. I didn't install any drivers. Then I set up intel_iommu and pci-stub (with the 7870's IDs) so the 7870 can be passed to the VM. Now it doesn't boot anymore and just shows a black screen. There is a glimpse of some command line output for a moment, but on the 7870's output. Unfortunately I don't remember the line.
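For reference, this is roughly how those kernel parameters can be set on Fedora. A sketch only: the 1002:6818 / 1002:aab0 IDs below are examples for a 7870's GPU and HDMI-audio functions, not taken from this thread; substitute whatever `lspci -nn` actually prints for your card.

```shell
# Find the card's vendor:device IDs (look for both the VGA and the
# audio function; both usually need to be stubbed).
lspci -nn | grep -i amd

# Add the IOMMU and pci-stub parameters to every installed kernel.
# The IDs are examples; use the ones lspci printed for your card.
sudo grubby --update-kernel=ALL \
    --args="intel_iommu=on pci-stub.ids=1002:6818,1002:aab0"
```

After a reboot, `cat /proc/cmdline` should show both parameters, and `dmesg | grep pci-stub` should show the card being claimed.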
Does anyone see the problem? Does the host GPU have to be in the first/top PCIe slot? Can I tell the BIOS to use the bottom PCIe slot as the main one, like I could with the Intel graphics? I don't see such an option. Do I have to install any drivers? If so, how and which?
No dumb question, I might have missed that :) But yes, they are different.
Good question. I think I tried that when I had a Quadro instead of the FirePro, and it worked when I used the 7870 as the host card and passed the Quadro through. If I find some time I will test it again with the FirePro as the guest card. But that would hint that it really is the PCI slot configuration, right? Or what is that test going to tell me?
Let's say passing the FirePro to the guest with the 7870 in the first slot for the host works. I haven't tried it yet, but something in my memory tells me it will. And judging from other forums, both cards can boot with UEFI; at least the 7870 can, because that's how I set it up on the first try. Then how would UEFI booting help to work around this?
Usually the UEFI requirement for the guest BIOS is about rebooting your guests without rebooting your host (re-initialization of the card).
From what I understand, vfio or pci-stub needs to load before your radeon driver, or the card will be bound to the host. I'm using vfio, so I don't know how pci-stub differs, but I'm also passing through a 780 Ti and using Intel graphics. I did consider putting another card in the system (I have 3 7950s and 3 R9 280Xs to pick from) because there are a bunch of glitches and issues with the Intel Skylake graphics right now (at least for me with 3 monitors), but I haven't actually tried putting another card in yet. I'm planning to upgrade my 780 Ti with Nvidia's 2016 graphics card lineup once they have Linux driver support, so I may keep the 780 for a bit just to test whether I can get it working for the host with the other card passed through to the guest.
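On the load-ordering point: the usual trick to make vfio-pci claim the card before the radeon driver can is a softdep in modprobe.d. A sketch under assumptions: the IDs are hypothetical examples (substitute your card's from `lspci -nn`), and the `ids=` module option only exists on newer kernels; on older ones you'd bind via `new_id` in sysfs instead.

```shell
# Tell vfio-pci which card to grab, and make the radeon driver wait
# until vfio-pci has loaded (IDs are examples, not from this thread).
printf '%s\n' \
    'options vfio-pci ids=1002:6818,1002:aab0' \
    'softdep radeon pre: vfio-pci' \
  | sudo tee /etc/modprobe.d/vfio.conf

# Rebuild the initramfs so the config applies at boot (Fedora).
sudo dracut -f
```

After rebooting, `lspci -nnk` should show `Kernel driver in use: vfio-pci` for the passthrough card and `radeon` for the host card.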
I think you may need two different-brand cards for your use case, but I haven't done much research on having more than one card of the same brand while also using PCI passthrough.
To clarify, I meant that you can't have two Nvidia/AMD cards and pass one through. I could be wrong, though, since I could swear it works with Unraid, but I guess that's because there's no GUI. But thank you! I will find out what happens when I hit that point, since I'm pretty sure (90%+) that I could run an Nvidia card and an AMD card, one for the host and one for the guest, no problem. Just need to sell my surplus!
Ah sorry, I misunderstood. But I'm sure you can pass through even if both your cards are the same brand. One of the success stories in another post used three R9 270Xs, two of them the identical card: the third one was the host GPU and the two identical ones were passed through.
UEFI can improve guest pass through handling, but I was discussing specifically for the host. Two different use cases.
On my system, the host boots both the integrated and the add-on graphics card in UEFI mode, and both initialize and output a display at startup (pre-OS). You aren't limited to the system picking the first GPU it sees and initializing only that card until the OS loads proper drivers, which may be the issue you described.
The card you want to use for passthrough must be in the lower slot: host card in the top slot, Windows VM card in the lower slot. That's basically all I know about PCI passthrough, other than that my CPU doesn't support it.
Thank you! But that would be bad news. I think I have to try it anyway, because I want to know at least. Unfortunately (and this is what caused the whole problem) the bottom slot is restricted to single-slot cards because the PSU is in the way. But when I have time I will move the PSU and switch the cards to test this!