Ordered a GPU and will be starting down the path of GPU passthrough on Linux tomorrow.
Two questions kind of related to it.
I presume I want the GPU for Linux in the top PCI-E slot? As in, when I install a distro with that GPU, it'll 'select' that card?
Considering my Linux GPU will be quite a bit weaker (RX 550 vs. my main GTX 1070), if I wanted to play a game on Linux rather than Windows using my GTX 1070, how would the graphics switching be handled? (Is that even possible?) I would also, of course, have two different drivers, one for each card.
Top card should be for Linux and the bottom one for Windows.
You will have to shut down Windows and reinitialize your GTX 1070. Look into Bumblebee if you want to do this. I would recommend doing all your gaming on Windows or on your 550 instead, because it will save you a lot of headache.
Technically, if your IOMMU groups are separated it really doesn't make a difference which slot either GPU resides in. It's not a boot-order thing where the first PCIe slot is initialised first; see the answer to #2.
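A quick way to check how your cards map to IOMMU groups is to walk the standard sysfs tree. This is just a sketch; the helper name `list_iommu_groups` and its optional root-prefix argument are mine, not anything standard:

```shell
#!/bin/sh
# list_iommu_groups: print each IOMMU group and the PCI addresses in it.
# Takes an optional root prefix (handy for testing); defaults to the real /.
list_iommu_groups() {
    base="${1:-}/sys/kernel/iommu_groups"
    if [ ! -d "$base" ]; then
        echo "no IOMMU groups found (is IOMMU enabled in the BIOS and kernel?)"
        return 0
    fi
    for g in "$base"/*/; do
        [ -d "$g" ] || continue
        grp="${g%/}"
        echo "IOMMU group ${grp##*/}:"
        for d in "$g"devices/*; do
            [ -e "$d" ] && echo "  ${d##*/}"
        done
    done
}

list_iommu_groups
```

Feed each printed PCI address to `lspci -nns <address>` to see which device it is. If each GPU (together with its audio function) sits in its own group, either slot should be fine for passthrough.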
Here's the thing: on a passthrough system, devices are "blacklisted". This blacklisting hides the devices from the host OS so they are not grabbed and used by the host; as far as the host system knows, these devices do not exist.
When the host system boots, GRUB isolates the devices you have blacklisted before the host comes up, so they are unavailable whether or not you intend to start the guest system. It is, I guess, possible to make a boot entry in GRUB that loads the blacklisted devices normally, but if you forgot and tried to start your VM it would probably crash, and maybe bring down the host with it.
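As a concrete sketch of that "blacklisting": the usual approach is to bind the passthrough GPU to the vfio-pci driver on the kernel command line, so the host's normal driver never touches it. The vendor:device IDs below are placeholders; get the real pairs for your 1070 and its HDMI audio function from `lspci -nn`:

```
# /etc/default/grub -- example only; IDs are placeholders, and use
# amd_iommu=on instead of intel_iommu=on on an AMD CPU
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=10de:1b81,10de:10f0"
```

After editing, regenerate the config (`update-grub`, or `grub-mkconfig -o /boot/grub/grub.cfg`) and reboot; `lspci -k` should then show `Kernel driver in use: vfio-pci` for the 1070 and its audio function.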
Most people who run a passthrough VM and game on the guest pass more than just a GPU. In my case I pass two GPUs in CrossFire, a NIC, and a USB 3 controller; all of these devices are removed from the host and given to the guest as physical hardware. Other devices are then given to the guest as virtual hardware, such as CPU cores, memory, disk space, and in my case access to a USB 2 controller that is shared between the two systems.
Then, in the Windows guest, devices are loaded just like on a bare-metal install of Windows: a keyboard, mouse, USB sound card, and Logitech G13 game pad (all connected to that passed-through USB 3 controller). These devices are never seen by the host system because of the earlier blacklisting of that USB 3 controller.
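That physical-vs-virtual split looks roughly like this as a raw QEMU invocation. A sketch only: the PCI addresses and disk image name are made up, and most people drive this through libvirt rather than typing it by hand:

```
# Sketch: vfio-pci devices are real hardware handed to the guest;
# -smp, -m, and -drive are the virtual resources. Addresses are fake,
# 01:00.0/01:00.1 standing in for a GPU + its audio function,
# 05:00.0 for a passed-through USB 3 controller.
qemu-system-x86_64 \
  -enable-kvm -cpu host \
  -smp 4 -m 8G \
  -drive file=win10.img,format=raw,if=virtio \
  -device vfio-pci,host=01:00.0,multifunction=on \
  -device vfio-pci,host=01:00.1 \
  -device vfio-pci,host=05:00.0
```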
I hope some of this makes sense to you. If you look at this as two computers in one box that share some hardware physically, some virtually, and some totally isolated from each other, you should start to understand why a passthrough setup that sometimes works one way and sometimes another based on a boot menu option can be done, but the chances of it being a big PIA are very high. In some cases it's just better to do all your gaming in Windows, or to buy a better GPU for the host system so you have both bases covered gaming-wise.
From my experience, the card to be blacklisted (for Windows VM assignment) would not work in the 1st PCI-E slot. I always had to have the primary (host/Linux) GPU installed in the first PCI-E slot and the "passthrough" card in my second PCI-E slot. Most boards seem to default the primary GPU to the one installed in the 1st PCI-E slot.
I tried this on an Asus and an MSI board, and they both reacted the same way, treating the 1st PCI-E slot as the primary GPU, so I was basically forced to install the Linux GPU in the 1st PCI-E slot. This will not be an issue if you have multiple x16 slots. It does become an issue, however, if you are using a lower-end chipset board, say an AMD 970 (or Intel equivalent), which usually has a primary PCI-E x16 slot but a second slot that is only x4! This would force you to use the x4 slot for your passthrough GPU, which will affect performance. Note, however, that there are motherboards that let you select a "boot order" for the graphics adapter, buried somewhere in the bowels of the BIOS.