It's also worth mentioning that a successful passthrough system will include other devices being physically passed to the guest besides the GPU. In my system I pass a NIC (so the host and guest each have their own) and an entire USB 3 controller (along with the devices connected to it). The reason to blacklist these devices on the host and pass them through is to avoid giving the guest virtual hardware. The problem with virtual hardware is the question of who controls it at any given time, the host or the guest? Control of virtual hardware is normally shared between the two, so at any moment either or both might request it.
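As a rough sketch of what "blacklisting and passing through" looks like on a Fedora-style host: the idea is to have the stub driver vfio-pci claim the devices at boot, before the normal host drivers can. The PCI vendor:device IDs below are placeholders, not my actual hardware; find yours with `lspci -nn`.

```shell
# Tell vfio-pci to claim the GPU, its HDMI audio function, and the guest's
# NIC at boot (placeholder IDs -- substitute the output of "lspci -nn"):
echo "options vfio-pci ids=10de:13c2,10de:0fbb,8086:1533" \
    | sudo tee /etc/modprobe.d/vfio.conf

# Make sure the vfio modules are in the initramfs so they load early
# (dracut-based distros like Fedora):
echo 'add_drivers+=" vfio vfio_iommu_type1 vfio_pci "' \
    | sudo tee /etc/dracut.conf.d/vfio.conf
sudo dracut -f

# You also need the IOMMU enabled on the kernel command line
# (intel_iommu=on or amd_iommu=on); after a reboot, "lspci -nnk" should
# show "Kernel driver in use: vfio-pci" for each passed-through device.
```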
In the case of the USB 3 controller it just makes life simpler: anything plugged into its ports behaves as if it were a bare-metal install, i.e. Windows sees it and loads drivers for it. My USB 2 controller is shared, so anything plugged into it is seen by both the host and the guest (most of the time...lol — the guest is a little flaky about that depending on the device).
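Passing a whole USB controller to a libvirt guest works like any other PCI device. Here's a hypothetical example; the host address (06:00.0) and the VM name ("win10") are placeholders you'd substitute from `lspci` and `virsh list`:

```shell
# Detach the USB 3 controller from its host driver:
virsh nodedev-detach pci_0000_06_00_0

# Describe the device and attach it to the guest's persistent config:
cat > usb3.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device win10 usb3.xml --config
```

After that, everything plugged into that controller's ports belongs to the guest, hot-plug included.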
The NIC is just common sense: if you do anything online you don't want latency, and sharing a NIC causes latency on both systems as control over it is wrestled back and forth between the two.
We could also mention audio, which is a very big issue if you want to game on your guest system. Sharing an audio device is problematic at best, worse in most cases than sharing a NIC, because modern OSes use audio constantly to alert the user, on top of anything audio-related the user actually wants to do, from music to games.
My solution was a USB sound card connected to the guest through that passed-through USB 3 controller. I get great audio from both the host and the guest, but like everything in the passthrough world you need another set of speakers or some way to duplicate or share that output. I use headphones, since I use my guest system mostly for gaming.
My only reason for typing all of this is to show, or reinforce, my point that a passthrough system needs to be planned out from the start; it's the only way to cover all the bases and have a good experience. Once you have a good working system, though, you will never want to go back to a single OS on bare metal — it's just too handy having both running side by side.
And before someone asks: I planned out as much as I could, but I still had issues to overcome, like the audio in the guest system. I've built about 10 VMs over the last two years, refining things, adding hardware, moving the guest from Windows 7 to Windows 10, upgrading the host from Fedora 23 up to 25 over that same period, and switching from SeaBIOS to UEFI, and it gets easier and easier to do. I'll admit my very first passthrough attempt failed because I just didn't have the knowledge needed to do it correctly, but every one since has been a success.
There are lots of things on the horizon, like virtual GPU (vGPU) support, that promise to remove the hassle, but they are a ways off yet and will come with glitches and specific hardware demands at first, I'm sure. And while the ability to share a GPU easily doesn't address all the other issues I allude to above, it's a good first step, as long as it can be done without a huge amount of latency. If latency is an issue, good old-fashioned PCI passthrough will remain the way to go, which is why AMD fixing its IOMMU grouping is very important. We'll have to wait and see.
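IOMMU grouping matters because devices in the same group generally have to be passed through together. A common way to check your board's grouping before planning a build is a small script over sysfs, along these lines (paths assume a Linux host with the IOMMU enabled):

```shell
#!/bin/sh
# List every PCI device by IOMMU group. Devices sharing a group can't be
# split between host and guest, which is why coarse grouping is a problem.
base=/sys/kernel/iommu_groups
if [ -d "$base" ]; then
    for dev in "$base"/*/devices/*; do
        group=${dev#"$base"/}; group=${group%%/*}   # group number from path
        printf 'group %s: %s\n' "$group" "$(lspci -nns "${dev##*/}")"
    done | sort -n -k2
else
    echo "No IOMMU groups found -- enable VT-d/AMD-Vi in firmware and add" \
         "intel_iommu=on (or amd_iommu=on) to the kernel command line."
fi
```

If your GPU shares a group with, say, a SATA controller, you either pass both or pick a different slot/board.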
Sorry for writing so much and hijacking the thread... just trying to add a little insight.