Looking Glass - Triage

Just my experience with using LG currently and remaining gripes.
I guess the GPU disconnecting the monitor / stopping video output after a while (usually a few hours) can’t be helped. I get either a black screen or a corrupted image of the previous video output until I switch the monitor input back to the passed-through GPU and wait a few seconds.
When I’m unable to switch monitor input, e.g. because I’m accessing the machine remotely, I can’t use LG anymore and have to fall back to something like Windows’ remote desktop.
After using Remote Desktop and switching back to LG, once I was able to change the monitor input or turn the monitor back on, I noticed that the in-game resolution is locked at something low which I can’t increase anymore.
Either way, a12 seems quite a bit better, and neither the host application nor the client needs to be restarted anymore after switching games or changing resolutions. Nice progress so far @gnif, and thanks for your continued hard work on the software!


Hey, I’m trying to get Looking Glass to work on Fedora 29. I went through the quickstart using a12, but my client always shows a black screen. Below is the output.

[I]               main.c:757  | run                            | Looking Glass ()
[I]               main.c:758  | run                            | Locking Method: Atomic
[I]               main.c:773  | run                            | Wayland detected
[I]               main.c:779  | run                            | SDL_VIDEODRIVER has been set to wayland
[I]               main.c:806  | run                            | Trying forced renderer
[I]               main.c:751  | try_renderer                   | Using Renderer: OpenGL
[I]               main.c:961  | run                            | Waiting for host to signal it's ready...
[I]             opengl.c:469  | opengl_render_startup          | Vendor  : X.Org
[I]             opengl.c:470  | opengl_render_startup          | Renderer: AMD Radeon (TM) RX 460 Graphics (POLARIS11, DRM 3.27.0, 4.19.10-300.fc29.x86_64, LLVM 7.0.0)
[I]             opengl.c:471  | opengl_render_startup          | Version : 4.4 (Compatibility Profile) Mesa 18.2.6
[I]             opengl.c:483  | opengl_render_startup          | Using GL_AMD_pinned_memory
[I]               main.c:970  | run                            | Host ready, starting session
[I]             opengl.c:896  | configure                      | Using decoder: NULL
[E]             opengl.c:863  | _check_gl_error                | 955: glGenBuffers = 1281 (invalid value)
[I]             opengl.c:896  | configure                      | Using decoder: NULL

A couple of things worry me. First, the client doesn’t show the version. Second, I need to specify OpenGL or else it uses EGL and gives me a segfault.

Hello. I made a discovery that might be of use to other people:

I had performance problems with a12. Compared to a11, I was losing 300 points in Cinebench and struggled to reach a locked 60 FPS in WoW. Assassin’s Creed Odyssey had loading pauses in the middle of a horse ride, something that never happened before. I tried to find a solution and tested a lot of things, but I could not figure out what the problem was.

A few minutes ago I had an intuition and tried to launch the looking-glass client with the “-K 60” parameter to limit the FPS in the client to 60… and it worked! Cinebench got back from 700 to 1000, WoW managed to get ~100 unlocked FPS, and Assassin’s Creed stopped struggling. I don’t know what happened; maybe it was related to my SDL problem?

I’d like to add that the glGenBuffers bug does not appear when building from the latest commit on master, only from the a12 release. I’m sure other people have run into this bug. Gnif, could you update the a12 release source code?

First, the client doesn’t show the version.

This is nothing to worry about; it tries to get the build version from the git repository, but if you downloaded a tagged release or master.zip from GitHub it is unable to determine the repository version.

Trying forced renderer

EGL should be in use. Can you please run the client under a debugger (gdb) and provide the output from gdb when it faults?
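
In case it helps, a minimal way to do that (assuming the client binary is called looking-glass-client and sits in your current directory; adjust the path and any arguments to match how you normally launch it):

gdb ./looking-glass-client
(gdb) run
(gdb) bt

Start the client with run, reproduce the segfault, and once gdb stops at the fault type bt to print the backtrace, then paste that output here.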

Gnif, could you update the a12 release source code?

No, A12 is static for testing; you will need to either run master or wait for B1.
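
For anyone else wanting to run master rather than the a12 tarball, the source lives at https://github.com/gnif/LookingGlass, so a plain clone is enough (the client sources are in the client/ directory, and the build steps should be the same ones from the quickstart):

git clone https://github.com/gnif/LookingGlass.git
cd LookingGlass/client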

A few minutes ago I had an intuition and tried to launch the looking-glass client with the “-K 60” parameter to limit the fps in the client to 60

What was your limit before? There is always a limit, and the default is 200 FPS. The only way this would affect things is if you had this value way too high and the game/application was not limiting its frame rate (vsync disabled).

While it’s common to disable vsync in your game/application to improve latency, when using Looking Glass it is worse than useless: it hampers performance AND latency.

DXGI Desktop Duplication obtains the frame without waiting on the vertical sync, and if vsync is disabled in the client you get the same effect as disabling vsync in the game. By turning off vsync in your game/application you force it to render frames without any FPS limit, leaving no GPU time for other things, like capture.

If you wish to force a specific FPS limit in Windows, you can create a custom resolution with the desired refresh rate in the NVIDIA control panel and leave vsync enabled in your game/app. This will impose an FPS limit, but will not add latency provided you turn vsync off in the LG client.

Please note: some monitors, especially older CRT monitors, can be damaged by running crazy custom refresh rates. In today’s world of digital displays I doubt this is an issue anymore, but it must be said just in case. So if you’re worried, do not attach a physical monitor but instead use a dummy display device.

Hello, I have several questions.

Using LookingGlass with Game Mode
As far as I know, the Game Mode feature in Windows 10 allocates as many system resources as possible to the game, so does that mean Looking Glass will run with fewer CPU cycles? And does that mean it won’t be able to copy every frame (because of lag)?

I don’t know if I’m making myself clear, but if you do something CPU intensive, the OS scheduler will allocate fewer CPU cycles to other applications, so can Looking Glass performance be affected by this?

Is Wayland supported?
I would like to ask if Looking Glass supports Wayland, or should I use Xorg? Is there any difference?

CPU performance hit
I was asking some questions at https://forum.level1techs.com/t/questions-about-looking-glass/127078/6

I was also interested in the CPU/RAM performance hit (I have an i5 4670, so I don’t know what the CPU load would be on my system), and one person answered me:

Expect to be able to dedicate one core to looking glass.

Is it true that Looking Glass is so CPU hungry that I should dedicate a whole CPU core just to frame copying?

And he replied this too:

I’ve not experienced much more than 30% utilization between the Windows Agent and the Linux “Server”, but some people have experienced more.

In my opinion 30% CPU load just for frame copying is too much, so here goes my suggestion.

As far as I know, a discrete GPU doesn’t have direct access to system RAM, but an iGPU does, and let’s be honest, most people use the iGPU on the host machine. So what if the iGPU, instead of the CPU, did the reading and displaying of the frame from shared memory? Maybe it could decrease the CPU load.

Thanks for the info!

What was your limit before? There is always a limit, and the default is 200 FPS.

I guess it was 200 FPS then; I was not using the -K parameter. I discovered it with the help command, and that gave me the idea.

Do you mean that the default of 200 is crazy and people have to use the -K parameter with a more sensible value? If that is the case, you might want to add it somewhere in the documentation if it is not already there.

I think there might be something fishy with my setup, because I was already using the in-game vsync. It was really “-K 60” that gave me back the performance I had in a11. I might be mistaken, but I am not sure a capture problem would explain the lower score in the Cinebench CPU test.

I am glad that limiting the FPS in the client made everything normal again, but I am wondering why I am the only person that got these issues 🙂

You would have to ask Microsoft. However, DXGI already only provides frames that have changed, so if nothing is going on, the capture rate will be very low.

Yes, however if you run the host application with admin privileges it will try to escalate its priority to realtime to avoid this.
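
As a rough illustration (this is just the standard Win32 call involved, not the actual LG host code), requesting realtime priority looks like the sketch below; note that without elevation Windows quietly downgrades the request to high priority:

#include <windows.h>
#include <stdio.h>

int main(void)
{
  // Ask Windows for realtime priority for this process. Without admin
  // rights the call still succeeds, but the process only gets
  // HIGH_PRIORITY_CLASS instead of REALTIME_PRIORITY_CLASS.
  if (!SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS))
    printf("SetPriorityClass failed: %lu\n", GetLastError());

  if (GetPriorityClass(GetCurrentProcess()) != REALTIME_PRIORITY_CLASS)
    printf("Realtime priority was not granted, run elevated.\n");

  // ... the capture loop would run here ...
  return 0;
}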

Not officially, but patches have gone in that should make things work with Wayland.

No, whoever provided this information has something wrong with their system or they are running a very old build of Looking Glass.

Again, based on old versions of LG, remember this is Alpha software under heavy development.

No, 200 is a sane value, but hardware varies, as do your requirements, which is why it’s a configurable option. A11 had the same 200 FPS limit; nothing has changed in the client application that could affect the VM’s performance.

However, the host application has had some tuning performed, and capture under Windows using any software incurs a performance penalty no matter what you do.

About the only thing that would affect Cinebench would be CPU load, as it’s not done on the GPU; it’s a CPU test. Have you dedicated cores to the Windows VM, or are you sharing them with the host’s applications? If LG at 200 FPS was running too fast for your cores it could have been hampering the performance of your guest in general.

What is your CPU?

I have a 9700K @ 5GHz. I gave 6 cores to the VM and kept 2 for my host. Before finding the -K parameter I tried with 4 cores, and I also tried isolating cores completely for the VM, but there was no difference unfortunately.

I also tried disabling the overclocking and going back to stock value, just in case it was some instability, but it also did nothing.

Speaking of resource usage, according to the Windows 10 task manager the looking-glass host is sometimes using up to 18% of my 2080 Ti. Is that reliable information? It seems like a lot, but maybe it is just using something that has no real-world impact on gaming.

Speaking of resource usage, according to the Windows 10 task manager the looking-glass host is sometimes using up to 18% of my 2080 Ti. Is that reliable information? It seems like a lot, but maybe it is just using something that has no real-world impact on gaming.

Looking Glass should be CPU intensive, not GPU intensive as far as I know.

What is your Looking Glass CPU usage in % on the host and client during gaming?

This is not correct, Looking Glass is extremely lean on CPU and GPU resources.

Thanks, and what is the typical CPU load of the client and host? I have an i5 4670 (4C/4T @ 3.8GHz); I would like to know what to expect.

So Looking Glass checks the current frame against the previous frame every time, to see if it changed? And if they are the same, then it does not copy?

It will work, however your CPU is sub-optimal for any form of high-performance virtualisation passthrough in the first place. While LG is lean, you simply don’t have enough cores for a good experience. You really need 8 cores at minimum, with which you can dedicate 4 cores to the VM.

This is all very experimental; the only way you are going to know what to expect, outside of the information and demonstration videos already provided, is to try it out yourself.

No, LG does not check; the Windows capture API only provides a frame if there has been a change. If there is no change then there is no copy, as there is nothing to update.
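
To illustrate what that looks like at the API level, here is a rough, error-handling-free sketch of the DXGI Desktop Duplication pattern (a generic example, not the actual LG host source):

#include <d3d11.h>
#include <dxgi1_2.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

int main()
{
  // Create a D3D11 device and walk down to the first output of its adapter.
  ID3D11Device *device = nullptr;
  ID3D11DeviceContext *context = nullptr;
  D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                    nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

  IDXGIDevice *dxgiDevice = nullptr;
  device->QueryInterface(__uuidof(IDXGIDevice), (void **)&dxgiDevice);
  IDXGIAdapter *adapter = nullptr;
  dxgiDevice->GetAdapter(&adapter);
  IDXGIOutput *output = nullptr;
  adapter->EnumOutputs(0, &output);
  IDXGIOutput1 *output1 = nullptr;
  output->QueryInterface(__uuidof(IDXGIOutput1), (void **)&output1);

  // Start duplicating the desktop of that output.
  IDXGIOutputDuplication *dup = nullptr;
  output1->DuplicateOutput(device, &dup);

  for (;;)
  {
    DXGI_OUTDUPL_FRAME_INFO info;
    IDXGIResource *frame = nullptr;

    // Wait up to one second for a new frame.
    HRESULT hr = dup->AcquireNextFrame(1000, &info, &frame);
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
      continue; // nothing on the desktop changed, so there is nothing to copy

    if (FAILED(hr))
      break;

    // The desktop changed: this is where a capture program would copy the
    // frame (e.g. into shared memory), then release it for the next one.
    frame->Release();
    dup->ReleaseFrame();
  }
  return 0;
}

If nothing on the screen changes, AcquireNextFrame just keeps timing out, which is why an idle desktop costs almost nothing to capture.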

But I can assign all cores to the VM. I KNOW there are MANY people who say IT IS NOT RECOMMENDED, but I have read a lot of tests and questions on Stack Exchange and it is not true that you shouldn’t assign all cores to the VM, for example here: https://unix.stackexchange.com/questions/325932/virtualbox-is-it-a-bad-idea-to-assign-more-virtual-cpu-cores-than-number-of-phy

And saying that the VM can consume all the CPU power is bullshit too, because the host is the “master”; the host’s OS scheduler assigns CPU time to all processes and to the VM.

The problem is not that. It’s task switching and the system being starved for CPU resources.

If you’re pinning all 4 cores, you don’t have enough CPU power to do everything you’re asking it to do, so performance will take a hit.

Also, the guest VM can utilize all the CPU resources of the host if all cores are assigned.

Look at the priority on your VM sometime.


But if there is nothing running on the host machine except the one VM running one game, I don’t think there will be any performance hit.

It entirely depends on the game. FTL, for example, won’t be a problem, but Battlefield will.

My recommendation is to assign 3 cores to your VM. Leave one core for your host OS.
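
If you are using libvirt, the usual way to do that is CPU pinning in the domain XML; a sketch for a 4-core CPU like the i5 4670 (the host core numbers are just an example, check your topology with lscpu first):

<vcpu placement='static'>3</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <vcpupin vcpu='2' cpuset='3'/>
</cputune>

That gives the guest three of the four cores and keeps core 0 free for the host OS and the Looking Glass client.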


I don’t play Battlefield; I play games like War Thunder, Fallout 76, EVE Online, and Cities: Skylines.

Yes, I can assign 3 cores and then 4 and test it, and then I will see the FPS, input lag, and Looking Glass FPS.

If I get more than 60 FPS it will be enough (most games are GPU intensive, except games like Cities: Skylines).

So I’m not really sure what you’re asking here.

Fallout 76 will pin 4 cores easily. Not sure about War Thunder. EVE isn’t super intensive on anything, so I wouldn’t be concerned about it. Cities: Skylines is the real concern. Are you unable to play it on Linux?