Looking Glass - Triage

What kind of user account are you using to log in, standard or admin? And is the account you’re using to run the new task a different user?

Make sure to check Run with highest privileges, as LG supposedly requires that now (I never tried without it).

Other than that it’s just the following; a rough schtasks equivalent is sketched after the list:

  • General -> Run with highest privileges
  • Triggers -> At logon (for your user)
  • Actions -> looking-glass-host.exe
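
For reference, a rough schtasks equivalent of those three settings (the task name and install path are placeholders, adjust them for your setup, and double-check afterwards that the logon trigger applies to your user):

  rem placeholder task name and path, adjust for your install
  schtasks /Create /TN "Looking Glass Host" /SC ONLOGON /RL HIGHEST /TR C:\Tools\looking-glass-host.exe

Creating the task through the Task Scheduler GUI as listed above achieves the same result.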

It does; stupidly, Microsoft doesn’t let us bump up our GPU priority without it.

Is it safe to assume the frame rate of a DisplayPort/HDMI dummy is irrelevant for Looking Glass?
Assuming I want to experience higher frame rates and my host GPU and monitor are capable.

Ty

For LG yes, it’s irrelevant, but for windows it’s not…

LG will take frames as fast as Windows will render them, so if you turn off vsync and get 200 FPS, LG will try to send 200 FPS to the client. However, if you turn off vsync you also starve the host application of GPU time to perform the capture; it’s a trade-off.

The best scenario is if you can turn off vsync but frame-rate limit the game, as some more modern titles let you do. For those that don’t, you can achieve the same thing by using RTSS.


FYI:


I looked into the RPCS3 project and found out they were doing AppImage builds. It sounds like a far better concept than Snap or Flatpak. Any chance Looking Glass’ client will get AppImage builds?

This would mean the host application is a pre-compiled EXE and the client is a pre-compiled AppImage. It would remove the need for make for many users, and I’m hoping at some point in the Beta cycle this is possible.

Also, a Qt-based GUI config for the client would be nice if people are using an AppImage, so they don’t necessarily need to use the terminal.

It’s customary to build with make when dealing with development software, or software that doesn’t yet have a stable release.

That said, once a proper stable release exists, distribution can take three forms.

  1. Snaps / Flats / Appimages - The worst of the three. While decent for end user software, these have pretty much no interaction with the system settings, do not use shared libraries, et cetera. Exceptions to the rule exist of course, but are not common.

  2. As a prepackaged .deb or .rpm provided by the software developer. This often only targets one or two supported distributions while everyone else has a lottery on their hands. Often a PPA or third party repository exists which means access to automatic updates. It’s an o.k. solution for less popular packages.

  3. As source. This is great, because if the software is popular enough (as in this case), most distributions will adopt and welcome the package into their repositories, provided the license allows it. This means more collaboration et cetera.

I have no idea how Gnif wants to handle packaging, if he has even considered it so far. Perhaps he considers it an “I’ll deal with it once it’s stable” issue.

Huh, does this affect the APU chipsets too, or just the standalone Vega cards (56 / 64 / VII)?

I am the developer, not the package maintainer. That said, I have already worked with some Debian, Arch and Gentoo maintainers who want to package LG, and I have done everything required to make it possible, both on a technical level and on a license level.

Edit: in fact, see https://packages.debian.org/sid/looking-glass-client

I have, however, asked the maintainers not to release an official package until LG reaches Beta 1.0.

Unknown, need people to test to find out.


That is awesome! :) While I will probably build from source directly once I get access to my LG machine, if Ubuntu 20.04 could have 1.0 as a package by default it would be really easy to support, say, my aging old man.

Right, I can’t do much for now but my 2400G + RX 580 will be available to run tests once I get it built in Q4.


Curious about the performance impact.

When directly attached to a monitor without running the Looking Glass host, the Cinebench R15 score inside my VM is ~720 (3 cores, 6 threads for the VM).

Currently trying B1-rc6 (with NVFBC); the VM scores ~620 in Cinebench R15 when the Looking Glass host is running (with a client attached, EGL renderer with vsync on but multisample & double buffer off).

Does anyone want to share a performance comparison before and after running Looking Glass?

This has been noted and discussed on these forums many times; there is 100% a performance impact due to running an additional process that also uses your GPU.

Cinebench is a purely CPU benchmark; I just want to know the general performance impact on the CPU side.

Going from ~720 to ~620 ((720 − 620) / 720 ≈ 0.14), it looks like you lose roughly 14% of CPU performance when Looking Glass is running.

So, did you ever figure out your issues?
And did you ever do any more testing with frame rate limiting?
I like using AMD’s Chill myself on my RX 480 (and my FreeSync monitor).
Not really doing Looking Glass and Linux right now though…

With various updates to Looking Glass that improved performance it got better over time. I also swapped to a Vega 64 for waterblock compatibility, but I doubt that made any difference to anything. For some titles, yes, I use RTSS to limit the frame rate, and depending on the performance doing that I also enable scanline sync. There are times when playing WoW where my in-game frame rate is high but my UPS will tank, usually when there are a lot of particle effects, but otherwise I usually play with LG instead of natively on the monitor, though the latter is still definitely smoother.

I am of course playing at quite a high resolution so either way it wasn’t going to be perfect, but it’s very very good.

When trying to compile:

imre@glass:~/LookingGlass/client$ make
[ 10%] Built target lg_common
[ 15%] Built target decoders
[ 20%] Built target spice
[ 24%] Built target font_SDL
[ 27%] Built target fonts
[ 29%] Building C object renderers/EGL/CMakeFiles/renderer_EGL.dir/shader.c.o
In file included from /home/imre/LookingGlass/client/renderers/EGL/shader.c:20:
/home/imre/LookingGlass/client/renderers/EGL/shader.h:32:71: error: unknown type name ‘size_t’
bool egl_shader_compile(EGL_Shader * model, const char * vertex_code, size_t vertex_size, const char * fragment_code, size_t fragment_size);
^~~~~~
compilation terminated due to -Wfatal-errors.
make[2]: *** [renderers/EGL/CMakeFiles/renderer_EGL.dir/build.make:204: renderers/EGL/CMakeFiles/renderer_EGL.dir/shader.c.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:314: renderers/EGL/CMakeFiles/renderer_EGL.dir/all] Error 2
make: *** [Makefile:130: all] Error 2

Got a bit of an API vs Capture question.

I know DX10, DX11, OpenGL and DX12 have high-speed hooks for DXGI, but how does Looking Glass compare for DX9 games? Goat Simulator and A Hat in Time are a few DX9 Unreal Engine 3 games that tap the DX9 API to its limit. Would this affect the DXGI capture at 4K?

It looks like you are inside the client directory, not the build directory. I don’t know if that would cause that compile error, though. I would do a fresh unpack of the zip file and then follow the instructions online: make a build directory inside the client directory, cd into that directory, and run cmake ../ then make.
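
For reference, a minimal sketch of the out-of-tree build described above, assuming the source is unpacked to ~/LookingGlass as in the error output:

  cd ~/LookingGlass/client
  mkdir -p build && cd build   # build out of tree, as the instructions describe
  cmake ../
  make

If an earlier in-source run left behind a stale CMake cache, a clean build directory also sidesteps that.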
