Looking Glass - Triage

@mathew2214

It is suggested that you create the shared memory file, with the appropriate permissions for your system, before starting the VM. This only needs to be done once at boot time. For example (this is a sample script only; do not use it without adjusting it for your requirements):

touch /dev/shm/looking-glass
chown user:kvm /dev/shm/looking-glass
chmod 660 /dev/shm/looking-glass
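If your distribution uses systemd, one way to have this happen automatically at every boot is a tmpfiles.d entry. This is only a sketch; "user" and "kvm" are placeholders for your own user and group:

# /etc/tmpfiles.d/10-looking-glass.conf
# type  path                     mode  user  group  age
f /dev/shm/looking-glass 0660 user kvm -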

What does your ls -ld /dev/shm/looking-glass tell you?

And how are you starting qemu? Presumably as yourself?

@mathew2214

Have you read this guide?

I've bound a hardware mouse to my VM, but the input lag is really noticeable (and annoying). When I switch to the native output from my graphics card there is no such lag, so there's quite some overhead with Looking Glass itself. Can I expect that to improve in the future?

I gave NvFBC a shot because I consider my setup barely usable. I know it's not supported, but curiously enough the input lag is even worse with it, unless I start a game, where it becomes pretty acceptable(?)

-rw-rw---- 1 root root 0 Sep 1 12:10 /dev/shm/looking-glass

I am running qemu as root.

Thank you, that fixed it right up. Now the VM starts, but Windows 10 gives an IRQL NOT LESS OR EQUAL error. I guess that's a question for a different thread and not related to looking-glass.

Do not. I repeat, DO NOT disclose which card you used it on. As per Nvidia's license, Geoff can get in HUGE trouble if NvFBC is used with the "wrong" card. It could result in lawsuits against the project.

Input lag is unrelated to the capture mode. Remove the Tablet pointing device from your VM; LG doesn't support absolute pointing devices.
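For a libvirt-managed VM, a rough sketch of how to do that (the domain name win10 is just an example):

# Example only: open the domain XML and delete the absolute pointing device,
# i.e. a line such as  <input type='tablet' bus='usb'/>
virsh edit win10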

For consumer cards it's impossible to do with his code anyway :slightly_smiling_face:

As for the input lag, I'm not using the tablet pointing device or in fact any emulated devices. As I've said, I'm passing my HID hardware right through to the VM, not even using spice, so LG shouldn't really have an interaction with the mouse. I suspect the input lag is purely related to the delay in the writing and reading of the framebuffer.

I'm not sure if it applies to DXGI as well, but per the Capture SDK forum, some 80 ms+ of input lag is supposedly to be expected due to triple buffering, which Windows 10's DWM pretty much enforces now with no way to turn it off. So I'm wondering if it's possible to have LG create a fake fullscreen app that allows interaction with the desktop while forcing vsync off, if that is in fact the culprit for the input lag.

Curious: for the host video card, do I need the same amount of VRAM as the Nvidia guest card?

No, the host card doesn't do any rendering and only needs enough RAM to render "pictures" at the desired resolution.

So it seems the fix is easier than I would have imagined. Just passing -o opengl:vsync=0 removed nearly all input lag.
I pretty much have the perfect VM now. So happy. :slightly_smiling_face:

Edit: I guess I cheered too early. While input lag is barely noticeable on the desktop, game performance is significantly crippled by disabling vsync in LG for some reason. FPS tends to be around 100 (200 as reported by LG) and UPS around 45 at most.

And another edit: Setting -K 60 fixes the crippled performance. Yay.
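For reference, the two options together look something like this (a sketch based only on the flags mentioned above; everything else left at its defaults):

# disable vsync in the OpenGL renderer and cap the client at 60 FPS
looking-glass-client -F -o opengl:vsync=0 -K 60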

That's awesome. I just bought an RX 560 to try this out. Thanks for the quick reply!

DXGI Desktop Duplication is not part of the old Windows capture API; the GPU driver itself handles the capture directly and provides the captured texture to Windows to hand off to the application. As such there is no triple buffering, and in some instances we actually get the new frame before it is even sent to the physical screen.

FPS is accurate at 200; there is a hard limiter to prevent it exceeding 200 FPS for those who run without vsync. -K changes this hard limit. If lowering this limit increases your UPS, the host CPU was starved for cycles running at that rate.

Your problem was more likely due to a compositor on Linux. Lowering the hard FPS limit to your actual refresh rate essentially introduces an artificial vsync without the benefits of vsync. It would be better to find out why vsync is introducing lag.

Turning the compositor off entirely doesn't seem to help either, I'm afraid. I'm unsure what else the reason would be; glxgears, for example, runs just fine and glxinfo looks as it should.

So I have had Looking Glass working amazingly well for a few weeks now. This morning I awoke to a failure. After reinstalling most of the virtualization software as well as Looking Glass, I still cannot get the looking-glass-client to hook into spice.

looking-glass-client -F
[I]               main.c:692  | run                            | Looking Glass (a69630764b)
[I]               main.c:693  | run                            | Locking Method: Atomic
[I]               main.c:686  | try_renderer                   | Using Renderer: OpenGL
[I]               main.c:775  | run                            | Using: OpenGL
[I]              spice.c:159  | spice_connect                  | Remote: 127.0.0.1:5900
[E]              spice.c:757  | spice_read                     | incomplete write
[E]              spice.c:591  | spice_connect_channel          | failed to read SpiceLinkHeader
[E]              spice.c:167  | spice_connect                  | connect main channel failed
[E]               main.c:868  | run                            | Failed to connect to spice server
[dustin@dustin-pc ~]$ 

The Windows looking-glass server is running without issue, and the GPU is still passed through. The Linux client is the only thing not working currently.
Any ideas would be very helpful.

Spice hasn't been working for me either as of late, and I'm not sure which package I updated that might be to blame. Try the evdev input method as a workaround perhaps.
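If you go the evdev route, one common approach is QEMU's input-linux object. A sketch only; the device paths are placeholders for your own entries under /dev/input/by-id/:

# extra QEMU arguments to hand a physical keyboard and mouse to the guest;
# pressing both Ctrl keys toggles the grab between host and guest
-object input-linux,id=kbd1,evdev=/dev/input/by-id/usb-YOUR_KEYBOARD-event-kbd,grab_all=on,repeat=on \
-object input-linux,id=mouse1,evdev=/dev/input/by-id/usb-YOUR_MOUSE-event-mouse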

I have been dealing with some "performance" issues when using Looking Glass (1440p, I know, not really supported) and it was driving me insane. My setup passes through everything except the mouse and keyboard (evdev for these), so I expected performance to be good. I am using Q35 as it has been easier for me to dual-boot the VM.

Most games run OK, except that sometimes I will see looking-glass using 30% GPU and games just slow to a crawl. Guild Wars 2 would run poorly from time to time. Also, the GPU would randomly go to the P2 state (600 MHz!).

Apparently, under some configurations, if you have the video card attached to a pcie-root-port (the proper way), the Nvidia driver can just go nuts and downclock the bus from gen3 to gen1 (it always stays at x16). I have tried different power profiles and regedit hacks, and only one thing worked: plugging the GPU and its HD audio function straight into the PCIe root bus. I have even set the power mode to optimal power, and the GPU will downclock (and drop to gen1) properly and scale back up when needed.

With this, Looking Glass is working perfectly fine at 1440p and 60 fps (1080 strix). Maximum is around 5% utilization for the looking-glass-host.
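For anyone starting QEMU directly, the same idea translates to something like the following sketch; the host addresses 43:00.0/43:00.1 and the guest slot 06 are examples only, adjust them for your card:

# attach the GPU and its audio function directly to the Q35 root complex (pcie.0)
# instead of behind a pcie-root-port
-device vfio-pci,host=43:00.0,bus=pcie.0,addr=06.0,multifunction=on \
-device vfio-pci,host=43:00.1,bus=pcie.0,addr=06.1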

Do not use GPU-Z to verify your current PCIe speeds; the report is usually wrong. Nvidia Inspector has been a little better, but I use this to verify:

#Replace 43:00.0 with bus:slot.func used by your passthrough graphics card
sudo lspci -s 43:00.0 -nvv|grep LnkS

The first line of output should look something like:

LnkSta:	Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

8GT/s = gen3
5GT/s = gen2
2.5GT/s = gen1

My system always boots at gen1 (I think that's normal), but it will change to the higher speed later when gaming (or under prefer maximum performance). Using the Aida64 GPGPU test, I also get close to 12000 MB/s on the memory test, while gen1 gives 3600 MB/s…

This is definitely triggered by the driver, considering my NVMe and other devices work without problems at 8x. I am not sure if this is a bug, a poor implementation on QEMU's side, or Nvidia shenanigans.

If you are having performance problems with looking-glass, check your lspci output while gaming; it could be related to this. @gnif, I think this should definitely be checked before confirming performance issues.
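To watch it while actually gaming, something like this (same bus:slot.func placeholder as above) refreshes the link status once per second:

# refresh the link status every second while a game is running in the guest
sudo watch -n1 'lspci -s 43:00.0 -vv | grep LnkSta'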

P.S.: According to some thread on Reddit, i440fx doesn't suffer from this. I wasn't able to test it, though.


If you are having the same issue I was, I found a work around.

For some reason spice has started using port 5901 instead of the default 5900 that Looking Glass is expecting.

Appending the -p 5901 flag while launching Looking Glass solved my issue.

Now, why the port for spice changed I have no idea, which is rather frustrating.
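For anyone hitting the same thing, two quick ways to check which port spice actually ended up on (the domain name win10 is just an example):

# ask libvirt where the guest's display is listening
virsh domdisplay win10
# or list listening TCP ports in the 59xx range
sudo ss -ltnp | grep ':59'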


In my opinion, it might be better to use Unix sockets instead of TCP ports if you like spice.
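On the raw QEMU command line that would look roughly like the sketch below (the socket path is only an example; newer QEMU versions spell the flags unix=on and disable-ticketing=on):

# spice listening on a unix socket instead of a TCP port
-spice unix,addr=/run/user/1000/win10-spice.sock,disable-ticketing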

Do you know of any documentation to do this with libvirt?