This question is for Wendell - I'm trying to find a hypervisor (bare-metal if possible) that supports Nvidia gaming GPUs, so I can run my current Windows install and an email server as VMs and switch between them as needed.
I have an i7-3820 with 32 GB of RAM and 4 TB of hard drives set aside for this. The GPU in question is an old Nvidia GTX 570.
Update:
The motherboard in use supports VT-d; it's an ASRock X79 Extreme4-M. Sadly, the i7-3820 does not have an integrated GPU.
Your CPU supports VT-d, which is hardware passthrough, but you need to check your motherboard too. Xen, KVM, and ESXi will all allow GPU passthrough to a VM, but you will need two GPUs: if your CPU had an integrated Intel HD GPU, that could drive the host machine, leaving your GTX 570 for the VM; otherwise you'll need a second discrete card.
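Before going further, it's worth confirming from the Linux side that the virtualization flag and the IOMMU are actually visible. These are standard diagnostics, nothing specific to this board, and the exact output varies per machine:

```shell
# Does the CPU advertise hardware virtualization? (vmx = Intel VT-x, svm = AMD-V)
grep -m1 -oE 'vmx|svm' /proc/cpuinfo || echo "no virtualization flag found"

# Did the kernel bring up the IOMMU? This needs VT-d enabled in the BIOS
# and intel_iommu=on on the kernel command line.
dmesg 2>/dev/null | grep -iE 'DMAR|IOMMU' || echo "no IOMMU messages (or dmesg restricted)"
```

If the second command comes back empty even with VT-d enabled in the BIOS, check your kernel command line before blaming the hardware.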
If you just want some GPU acceleration, VirtualBox lets you give the VM up to 128 MB of video RAM, but that is it. Bare-metal hypervisors are the only way to go if you want the full power of the GPU in the VM.
You can do GPU passthrough with ESXi, Xen, KVM, and Proxmox. Hyper-V is fantastic for a home lab, mainly due to ease of use and hardware compatibility. ESXi is great if you want to learn skills that translate into the real world, say if you're looking to become a VMware ESX Systems Engineer. Xen is arguably the hardest of the hypervisors to learn if you're new to them, but Citrix has a foothold in the corporate market, so it'd be a good addition to your toolkit. Which leaves us with the open-source options, KVM and Proxmox: Proxmox does both KVM and OpenVZ. OpenVZ is a Linux-only container technology, while KVM is full virtualization. We also have some new players like Docker doing high-level Linux containers, which are amazing for getting the most out of your resources.
ESXi can do GPU passthrough; the technology is called vDGA. You're probably going to have to use KVM though, because consumer-grade Nvidia GPUs result in Error 43. (The Nvidia driver refuses to initialize when it detects it's running inside a VM.) I also got Error 43 on the 290 in my server. ESXi hardware support is not that great, but you can inject Linux drivers for your hardware into the install ISO with relative ease. Hyper-V also has something called RemoteFX, which works well enough for general computing, although it's just 3D acceleration. I recommend KVM if you must pass through your GPU, and I'd build the stack yourself; Proxmox seems to have some weird installation issues if you install off a USB drive. YMMV
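For reference, the usual KVM/libvirt workaround for Error 43 is to hide the hypervisor from the Nvidia driver in the guest definition. A minimal sketch of the relevant fragment, which goes inside the `<features>` block of the domain XML (edited with `virsh edit`); the vendor_id value is an arbitrary string of up to 12 characters:

```xml
<features>
  <hyperv>
    <!-- any string up to 12 characters; it just must not identify KVM -->
    <vendor_id state='on' value='1234567890ab'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature from CPUID so the driver doesn't bail with Error 43 -->
    <hidden state='on'/>
  </kvm>
</features>
```

Whether this is needed depends on the driver version; newer Nvidia drivers relaxed the VM detection, but on older ones these two settings are the standard fix.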
Here are some resources to get you started. Keep in mind this is not a noob project: if you're not well-rounded in Linux, you won't be able to diagnose roadblocks, and you'll just waste your time and create a world of hurt for yourself. Don't let that stop you, but be prepared. Depending on your needs, and whether this is homelab or production, you may want to consider buying a supported GPU for ESXi and calling it a day.
You could use a switch, or have each GPU go to a different monitor. Also, you will have to blacklist the GPU you want to use for the VM so it can be assigned to KVM.
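As a sketch of that blacklisting step on a KVM host (the device ID below is a placeholder, not the GTX 570's real ID; pull your card's actual vendor:device pair from `lspci -nn`):

```shell
# /etc/modprobe.d/vfio-passthrough.conf
# Keep host drivers off the passthrough GPU so it can be bound to vfio-pci.
# The device ID is a placeholder -- get your card's real vendor:device pair
# from `lspci -nn | grep -i nvidia` (printed in brackets, e.g. [10de:xxxx]).
blacklist nouveau
blacklist nvidia
options vfio-pci ids=10de:xxxx

# After editing: rebuild the initramfs (update-initramfs -u on Debian/Ubuntu,
# dracut -f on Fedora) and reboot so the card is free at boot.
```

If the card has an HDMI audio function, it lives in the same IOMMU group, so its ID needs to be added to the `ids=` list as well.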
I just use two inputs on the same monitor if I only have one monitor hooked up. On most monitors there is an input-select button to scroll through the different inputs. You could, for instance, use HDMI for the host adapter and DisplayPort for the passed-through client adapter. It works really well.

You can also easily program the host Linux system to black out the screen when the client grabs X input, and hook up the host adapter to the main or preferential input. In that case, when you switch your X input to the client, your monitor will not detect a signal on the main input and will switch to the secondary one, which is the client; if you grab X input on the host again, it will activate the host adapter and your screen will switch back to the preferential input.

This doesn't work on all monitors, though. I can confirm that it works on my LG and NEC panels, but not on my Samsung and EIZO panels, though there could be differences between models; I can't say.
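The "black out the host screen" trick described above can be as simple as toggling the host's output with xrandr around the input grab. A sketch, assuming an X session; the output name HDMI-1 is an assumption, so list your real output names with `xrandr -q` first:

```shell
# Output name is an assumption -- check yours with `xrandr -q`.
host_output="HDMI-1"

if [ -n "$DISPLAY" ] && command -v xrandr >/dev/null 2>&1; then
    # Host display goes dark; the monitor loses signal on its preferred input
    # and fails over to the VM's input.
    xrandr --output "$host_output" --off
    # ...and to come back to the host later:
    # xrandr --output "$host_output" --auto
else
    echo "no X session here; would run: xrandr --output $host_output --off"
fi
```

Hook the two xrandr calls into whatever you use to grab and release X input, and the monitor's auto input selection does the rest.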