
Looking Glass - Triage



It suddenly dawned on me while reading through the 1,000-odd posts that someone said AUR packages aren't supported, which made me wonder whether the client and host had a version mismatch, and lo and behold, that was the issue.

Now for the fun stuff let’s see what the performance is like.


Yeah, some people have had luck with the AUR package, but it's definitely a case of YMMV.

@gnif has said numerous times that the only things officially supported are the tagged releases from GitHub. There are a few people acting as alpha testers who help test his master builds, but that is not recommended.


Downgrading to A11 works fine, so the AUR package just needs updating. In the meantime I'll update the ArchWiki with the correct information so at least the next person to run into this will know what to do instead of having to ask the same question.


One of the things we’re trying to change is making this documentation more obvious. As a new user, please check out my UX thread. I would like to hear your thoughts.


Well if you need any help with the documentation I’d be happy to help with that to give something back to the project.
I think a wiki style might be really useful here as it means users can easily share their solutions and it will always be up to date.


I have a few questions about Looking Glass: the "before having touched it / still looking for a GPU (probably)" type of questions.

  1. I always thought it copies the VRAM to the other card's VRAM, but I've read somewhere that it actually uses RAM. Depending on how that works out performance-wise I'd prefer that, since it would let me more easily use a "not so great" card for the host and have all the VRAM of my guest card usable. Also, is that configurable? Meaning, if I had enough VRAM, could I make it use the card's VRAM instead of RAM? Or is that only applicable when using an iGPU (since that would use RAM regardless)?
  2. If I use an AMD card for the guest, I could still use G-Sync with Looking Glass if my host card is an NVIDIA one, right? It might come to that depending on how Navi turns out. It might also be better, since AMD doesn't restrict virtualization the way NVIDIA does, so I won't have to work around the artificial limitations NVIDIA put in place. It kind of sucks to be soft-locked into one brand by your monitor choice.

It's still going to take some time until I decide whether I even want to try this to begin with. I'm at the eventually-probably-maybe-kinda-want-to-do-it stage.


It does, but by means of a system memory copy. Unfortunately there is no way to transfer directly between the GPUs. As for performance, it's fast enough to transfer 1000+ FPS; however, it is limited by the Windows capture performance.

This is the idea: you can use an integrated GPU for the host provided it supports OpenGL, which these days everything does.

It depends on what you mean. If you’re using a physical monitor on the guest with GSync, it will work as per normal even with LG running. If you’re trying to make GSync work on the host via Looking Glass, no, it is not possible as the LG client that runs on the host is rendering the captured image using the host’s video card, not the guest’s.


Would Looking Glass work on something like my MSI GS63VR? It's Skylake and has a 1060.


I meant that the Windows guest uses an AMD card and the Linux host my existing NVIDIA card connected to a G-Sync display. So this won't work, because the guest card does all the rendering? But it would work if the guest also has an NVIDIA card, with the display connected to the host? Or is there no way for it to know a G-Sync display is even available, and it has to be connected directly to the guest's GPU either way for G-Sync to work?


I have an MSI Z170 M5 and a Skylake chip, so I would like to think you are OK. Have you tested IOMMU support yet?


No I just wanted to know if it would be a possibility… Seems like it is?


@gnif A couple of times when I tried exiting out of looking-glass in full screen, my monitor stayed a black screen and didn't revert control back to Fedora/Linux. Not sure if this is an already-known bug; if not, I'll submit a ticket on GitHub with steps to replicate.


Google gives mixed results as to whether this will be successful, but a few users on this forum seem to have this board, so you should be able to get help. I would just set up a passthrough VM, as you need to do this before you set up Looking Glass anyway, and there is no better test than a live one.

Make sure you have the latest BIOS though, as looking at the MSI forums it seems older versions don't work.


What version of LG? Are you using SPICE? And how are you exiting the looking glass client? ALT+F4? Close button?


Hi there! I’m so thankful that this software exists, and finally means that we as a society can start moving away from having Windows actually installed on a partition to play certain games. (I’m looking at you, Destiny 2 and Resident Evil 7.)

I have a few issues to report, and I’ve seen a few posts about similar things, but I’m just chiming in with my own reports.

  1. EGL renderer on A12 and latest master segfaults the client. I don’t know whether this is a hardware thing or driver thing or what, as my iGPU is an Intel HD 3000, and my dGPU is an Nvidia GT 730.

  2. I can’t compare it to the EGL renderer because of these issues, but the UPS with the OpenGL renderer is incredibly choppy in comparison to what’s actually on display. If I plug my GPU into my TV, I can play games like Fallout 4 and Overwatch at 60+ FPS fluidly, but the instant I connect the client, the FPS on screen drops to closer to 40, and the UPS displayed struggles to stay at anything higher than 15-20FPS.

My specs aren’t amazing as I’m using a laptop from like 2011 with an eGPU setup, but alas, I feel like this still could be important to report.

Host OS: Antergos
Guest OS: Windows 10 Pro
CPU: Intel i5-2520M @ 2.5GHz (turbo frequency up to 3.2GHz), all cores passed through to VM.
RAM: 10GB (8GB passed through to guest)
GPUs: Intel HD 3000 (host) / Nvidia GT 730 (guest)

Let me know if there are any logs you need and I’ll try my best to get them to you.

Thank you so much for all of your hard work, you’re truly changing the future of PC gaming!


This is fixed in the master branch, B1 will include this fix.

Wouldn't matter; the renderer doesn't impact the UPS in any way, which is why UPS is separate from FPS.

Fallout is known to require running in borderless fullscreen mode; actual fullscreen mode causes capture performance issues with the game. Also ensure that vsync is ON in your games to prevent the GPU starving LG of GPU time. I detailed this a few posts up; please go back and read it.

Indeed, your setup is extremely sub-optimal and well below the recommended and tested specs. Also, since you're using an eGPU you are very likely only getting a slow link to the card, which can impact LG performance enormously, as discovered today with a user on Discord.

Thanks, you’re welcome



I have a working Windows 10 virtual machine with GPU passthrough (no error 43 or anything else; I can run OpenGL/DirectX applications fine).

However, when I try to start looking-glass-host.exe it fails at DXGI.cpp:293 | Capture::DXGI::Initialize | Failed to create D3D11 device: 0x887a0004.

The error message is in French (I installed Windows in French and tried changing the language, but it didn't work), so I translated it: "The device interface or required feature level isn't supported on this system".

I tried A12 and A11, but neither works. I'm currently compiling from source to see if I can get it to work; any help would be appreciated.


Just compiled the latest Looking Glass version (from GitHub) and I still get the same error. Apparently someone had the same problem, but it was related to Code 43.


OK, so I dug some more and found the reason why it won't initialize: I use a mobile GTX 1050, the GP107M. It's an Optimus muxless card, which means it doesn't have a display output, so DXGI won't work, sadly. It would be nice to have this in the FAQ or somewhere else so laptop users don't get false expectations.


Thanks, we’ll see about adding that to the documentation.