Looking Glass dual VMs on dual-GPU card


Linux noob here, but with ambitions. Long-ish story below to describe the motivations for the problem.

I’m in the process of building a 4-GPU VM machine for a LAN party box. My friends and I have a tradition of doing a LAN once or twice a year, but since most of the gang is traveling, we have had to scrounge for computers the last few times or just do without.

My solution, inspired by the LTT and Level1 videos on Unraid/VFIO/PCIe passthrough, is to do a 4x gaming VM setup for a nice LAN-in-a-box, each VM (Windows 10) with its own GPU and mouse/keyboard/usb/monitor…and that’s where it’s getting a little complicated.

My plan was to use an EVGA SR-2 (dual-socket mobo, 7 PCIe slots, plus overclocking support) because A) it’s cheapish, B) I know the X58 platform, C) cool factor. It will happily support 4x GPUs, but in doing so they block the remaining 3 unused PCIe slots. I’d like to be able to run a USB3 PCIe card and an SAS HBA at minimum (the former because the board has too few USB ports, the latter because the ancient chipset only supports SATA 2.0)…which physically cannot be done with 4x dual-slot GPUs. The board has its own issues with iffy virtualization support (e.g., the NF200 PCIe bridges), but I’m going to try anyway.

So…that brings me to my next brainwave: what about a dual-GPU card, e.g. the R9 295x2? I would be able to free up a PCIe slot pair if I can use the dual-GPU card as two separate GPUs. The 295x2, for those who don’t know, is a dual-GPU AMD card that is essentially two individual R9 290X cards CrossFired together via an on-board PCIe multiplexer chip.

Some furious googling later, I learned that the R9 295x2 might be an excellent candidate: each GPU (and its HDMI audio function!) shows up as its own PCI device in its own IOMMU group. Apparently I can’t link to the post on the Unraid forums showing that.
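For anyone wanting to verify that claim on their own hardware, here’s a small Bash sketch that lists each IOMMU group and the PCI addresses inside it. The function name and the optional sysfs-root argument are my own additions (so it’s easy to test against a fake tree); on a real host just call it with no argument, and feed the addresses to `lspci -nns` for friendly names:

```shell
#!/usr/bin/env bash
# List every IOMMU group and the PCI device addresses inside it.
# Optional argument: an alternate sysfs root (defaults to the real one).
list_iommu_groups() {
    local root="${1:-/sys/kernel/iommu_groups}"
    local group dev
    for group in "$root"/*/; do
        [ -d "$group" ] || continue
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group"devices/*; do
            [ -e "$dev" ] || continue
            echo "  $(basename "$dev")"   # PCI address, e.g. 0000:03:00.0
        done
    done
}

list_iommu_groups "$@"
```

If the two GPUs of the 295x2 really are cleanly separated, each should appear under a different group number here, along with its matching audio function.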

However, and this is where Looking Glass may come in– only one of the two GPUs has display outputs, the other is, physically, headless. I was recommended to look into Looking Glass as a possible solution by one of the LinusTechTips members, so here I am to ask questions.

I do not intend to use the hypervisor (likely Unraid or ESXi) as anything except the VM host. All interaction with the computer will be in one of the four VMs, plus Ethernet into the host for management. In the ideal scenario I would be using Looking Glass from within a Windows 10 VM with GPU 1 of the dual card passed through, to view the gaming VM that GPU 2 is passed through to-- expanded upon below.

From researching Looking Glass, my understanding is that it lets you view the graphics from one VM in a window, and that you can move your input devices between the windowed VM and its host VM/OS. As a hypothetical: I’m running the 295x2 with two monitors plugged into the card, one gaming VM per GPU, all run by the non-graphics-enabled hypervisor (Unraid/ESXi/whatever), with a keyboard/mouse passed through to each VM. Would/could it be as simple as setting the ‘host’ VM to be the one with the display-connected GPU, with its game output full screen on one of the two monitors, and the ‘guest’ VM (the one with the non-display-connected GPU) full-screen windowed on the other monitor via Looking Glass? Would the full-screen gaming VM have any issues with, say, mouse and keyboard getting confused between VMs? Is it possible for the Looking Glass host VM to be Windows and not Linux?

Thanks all! Backup idea is to find ways to convert a dual-slot GPU to a single-slot (e.g., water cooling), but if the dual GPU idea can work that sounds like a neat way to do it. Would be substantially cheaper too, given ebay prices on the R9 295x2.

This would be a problem, as Windows won’t create a ‘Desktop’ without a monitor attached unless you have a workstation-grade AMD card that lets you set manual EDID information. We normally use a dummy HDMI plug to enable the output on headless systems, so you still need the video port for this purpose.

That said, there may be a solution coming for this in the form of an indirect display driver for Windows, but it is at a minimum 6 months away from even being started (discovery has already been done, though), as my time is limited and current efforts are focused on a rewrite of the Looking Glass client application.

ESXi is not suitable for Looking Glass; the underlying hypervisor must be QEMU, which provides the virtual IVSHMEM device.
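For reference, the IVSHMEM device Looking Glass relies on is a QEMU shared-memory device backed by a file. A typical command-line fragment looks something like this (the `/dev/shm/looking-glass` path and the 32M size are commonly cited example values; the size actually needed depends on the guest resolution):

```
-object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M \
-device ivshmem-plain,memdev=ivshmem
```

ESXi has no equivalent of this device, which is why it’s ruled out as a host.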

Darn, back to the drawing board. I didn’t realize that’d be an issue. Thanks!

Another solution, which I’m currently using, is a laptop/NUC/Raspberry Pi for each user to plug USB devices into, with VirtualHere doing USB/IP. (I haven’t gotten the built-in Linux USB/IP to work with a Windows client yet.)
The downside is that if you’re using more than one USB device it’s $50 per license. But it works very well, and I notice no lag even when playing shooters like CoD, Far Cry, Apex, etc.
For audio I’m using the headphone out on the monitor through HDMI, but a cheap DAC would also work.
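For what it’s worth, the built-in Linux USB/IP tooling mentioned above works roughly like this (the bus ID `1-1.2` and the hostname `server-hostname` are placeholders; these commands need root and the relevant kernel modules, and the stock Windows client support is exactly the part that’s flaky):

```
# On the machine the USB device is plugged into (the "server"):
modprobe usbip-host
usbipd -D                # start the USB/IP daemon
usbip list -l            # find the device's bus ID
usbip bind -b 1-1.2      # export it (placeholder bus ID)

# On a Linux client:
modprobe vhci-hcd
usbip attach -r server-hostname -b 1-1.2
```

VirtualHere wraps the same idea in a polished cross-platform client, which is what the per-license fee buys you.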

Another solution would be Raspberry Pis or cheap laptops running Parsec or NVIDIA’s equivalent to stream from each VM to each client. With wired gigabit connections and some settings tweaks it works very well in-home, though I don’t recommend fast-action games if going over 1080p.