How to compile the Looking Glass client on Debian?

Hi

I am trying to set up Looking Glass on my Debian host.
Are there any ready-made packages? I can’t find the dependencies for compiling it; only other distros are mentioned.

I found the looking-glass-client package, but I don’t know what I have to do to start it after:

apt install looking-glass-client

If I search for “Debian looking glass client”
I get no tutorials, and there seems to be a version mismatch with the most recent stable release, B5.0.1. Or is that already the version shipped in Debian 11?

I found some package dependencies in another thread in this forum:

apt update
apt install build-essential libxkbcommon-dev libxcursor-dev libxpresent-dev
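
The official build page lists more than that, though. On Debian 11 I think the full install line would be something like this (I’m going off the B5.x build docs from memory, so treat the docs as authoritative, not my list):

apt install build-essential cmake pkg-config libegl-dev libgl-dev \
  libfontconfig-dev libspice-protocol-dev nettle-dev \
  libx11-dev libxi-dev libxinerama-dev libxss-dev \
  libxcursor-dev libxpresent-dev libxkbcommon-dev \
  libwayland-dev wayland-protocols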

I’ll see if I can get it running with this tutorial: https://looking-glass.io/docs/B5.0.1/install/

Thanks for any helpful answers.

PS: I am running the VM as a user session.

A maintainer packaged it for Debian against our wishes, and the package is now a very old, stale version that is useless.

Please don’t! Use the official installation documentation.

These guides/tutorials often overlook things, or recommend things that are completely wrong or specific to the author’s hardware.

I extracted the files for B5.0.1 to:

~/Downloads/looking-glass-B5.0.1

There I typed:

cmake -DCMAKE_INSTALL_PREFIX=~/.local .. && make install

and got an error:

CMake Error: The source directory "/home/username/Downloads" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.

You didn’t follow the instructions…

https://looking-glass.io/docs/B5.0.1/build/#client-building

I really didn’t.
Do you already know all of the possible error codes distros spit out at that point?
Or was it cmake -DCMAKE_INSTALL_PREFIX=~/.local .. && make install?

-DENABLE_LIBDECOR=ON

Why did you choose to prepend LIBDECOR with DENABLE, or even ENABLE?
Isn’t that redundant, and what does the D stand for?

I actually found an error on this page:
https://looking-glass.io/docs/B5.0.1/install/

### Determining Memory

You will need to adjust the memory size to be suitable for your desired maximum resolution, with the following formula:

"width x height x 4 x 2 = total bytes"

It should be:
“(width x height x 4) / 8 = total bytes”
The 4 is for RGB + Z-buffer, but why times 2?
For 2 frames / bitmaps?
Bits only become bytes when you divide by 8, typo?
If you know what a Mebibyte is you probably don’t want to read the following articles:

If you remove the “simple” from the URL you get redirected to the article for byte and the corresponding chapter:

I use the customary convention too sometimes, but in articles ambiguity can be a problem, so I advise against it.
No offence meant with the Simple English version; it was just the first thing the search engine spat out, and the only article that was self-contained and not part of another.

Seriously? Your post above clearly shows you extracted the tarball in your Downloads folder, then probably cd’d into it, then ran cmake there. CMake complained about not being able to find a CMakeLists.txt in Downloads. If you had followed the instructions as gnif suggested, you would have seen that you need to run cmake in a build folder you create under the client folder. You don’t need to “know all of the possible error codes” to be able to figure out what cmake is telling you in plain English.

No one “prepends” anything. ENABLE_LIBDECOR is the LG project setting for cmake to include the proper libraries to build the Looking Glass client with support for Wayland libdecor. It’s just a single piece of text that is defined in the client’s CMakeLists.txt and is OFF by default. And again, if you had read the documentation you would have found details about it. The -D in cmake invocations is a way to override a predefined project setting when cmake is building the project cache. Read the cmake man page to learn more.
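
For example, to build the client with libdecor support and a user-local install prefix, you’d run something like this from the build directory (the prefix here is just an example):

cmake -DENABLE_LIBDECOR=ON -DCMAKE_INSTALL_PREFIX=~/.local ../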

Wrong. No one is converting bits to bytes here. Each pixel is 4 bytes, not bits. Though the proper unit for this would be MiB as you indicated, most people use binary exponents when discussing storage and memory space (while manufacturers obviously stick to decimal exponents to save resources). The LG documentation is written for our users’ ease of use, and many of them would be confused if we started using the IEC-style unit names.

-D is a cmake argument that stands for DEFINE :facepalm:

Each pixel is represented as four bytes, or 32 bits: one byte each for Red, Green, Blue and Alpha. We also need two frames in shared memory, one for writing to and the other for the client to read from… so the math is indeed correct…

width * height * 4 * 2 = total bytes, not including some of the overheads, which is why we tell people to add on an extra ~10MB to cover this.
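
Worked through for a 1920x1080 desktop as a quick sanity check (bc is just one way to do the arithmetic):

echo "scale=4; 1920 * 1080 * 4 * 2 / 1048576" | bc

which prints 15.8203.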

Then you likely want to read the JEDEC standard, as IEC units do not apply to RAM.

Unit prefixes for semiconductor storage capacity

The specification contains definitions of the commonly used prefixes kilo, mega, and giga usually combined with the units byte and bit to designate multiples of the units.

The specification cites three prefixes as follows:

  • kilo (K): A multiplier equal to 1024 (2^10).
  • mega (M): A multiplier equal to 1,048,576 (2^20 or K^2, where K = 1024).
  • giga (G): A multiplier equal to 1,073,741,824 (2^30 or K^3, where K = 1024).

The JEDEC specification does not explicitly include the IEC prefixes in the list of general terms and definitions.

As for reading the documentation, no, you did not read it… it’s VERY clear that you need to make a directory and run cmake from inside it, pointing back at the source tree.

If you’ve downloaded the source code as a zip file, simply unzip and cd into the new directory.

mkdir client/build
cd client/build
cmake ../
make

This error tells me you did not cd into the build directory as per the instructions.
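
From your extracted tarball, the whole sequence would look something like this (the install prefix is optional, shown here only because you used it earlier):

cd ~/Downloads/looking-glass-B5.0.1
mkdir client/build
cd client/build
cmake -DCMAKE_INSTALL_PREFIX=~/.local ../
make
make install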

As for building on Debian, this is EXACTLY what the documentation targets, as it’s what I wrote, and still today continue to write, LG on.

No, but I know exactly what caused your error, and you got it because you didn’t follow the instructions.

If you still don’t want to read, I even made an extensive video on how to do this (although it does need a refresh due to the move away from the wiki for official documentation):

There is still a problem with the math:
[width * height * 8 (bits per channel/colour) * 4 (RGB + Z-buffer/alpha) * 2 (frames)] / 8 / 1024 / 1024 = <16 MiB

I got LG running, as well as sound, and I fixed an issue with the PS/2 input method by switching to USB.

I will document everything in my tutorial and include the sources / links to the documentation for myself and others.

Thanks for the help.

There is absolutely nothing wrong with the calculation… You do realise you’re talking to the inventor/developer of LG here?

You have a multiplication by 8, then you divide by 8, undoing the prior multiplication (1 * 8 / 8 = 1). There is NO need to work in bits at all here; it just confuses things.

Width * Height = number of pixels in total on the screen for ONE frame
* 4 = number of BYTES each pixel will consume in memory, where a BYTE is 8 bits
* 2 = two frames
/ 1048576 (= 1024 * 1024) = MB

Width * Height * 4 * 2 = total BYTES, then / 1048576 = MB

As for your <16MiB, this makes no sense as you didn’t provide a width & height value.

If we had such a fundamental issue with our maths here the application would not work at all as these calculations are used in the application also to prevent buffer overflows for when people accidentally set it too low.

Edit: Further proof that your calculation is just adding complexity where none is needed. Assuming 1080p (1920x1080):

(1920 * 1080 * 8 * 4 * 2) / 8 / 1024 / 1024 = 15.8203125
1920 * 1080 * 4 * 2 / 1048576               = 15.8203125

If you wanted to simplify it further you could even do this:

1920 * 1080 * 8 / 1048576 = 15.8203125

Because 4 * 2 = 8, not because we are using bits.

After adding 10MB for overheads and the additional data we need to transfer, it becomes 15.8203125 + 10 = 25.8203125, which then needs to be rounded up to the next power of two: 32MB.
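
That final figure is what goes into the VM configuration; for example with libvirt, the shmem device would look like this (per our install docs, assuming the default shared memory name):

<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
</shmem>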

Please don’t. We are in the process of making official guides, and these third-party guides are usually left to go out of date, confusing the public and causing all sorts of support requests that waste our time.

There are also many performance tweaks and improvements that can be made that people like yourself, being so new to LG, are usually unaware of. For example: setting up the kvmfr module and configuring LG to use DMABUF, if your specific hardware supports such features. The importance of CPU pinning and of not over-allocating to your guest. The limits of iGPU performance due to bandwidth limitations, etc. UX tweaks for KDE vs i3 vs GNOME… performance tweaks specific to Wayland vs X11. The advantages of jitRender, when it is applicable and how to use it. How to get the lowest latency while avoiding tearing and enabling vsync. The list goes on.
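
To give just one concrete example from that list, the kvmfr route boils down to loading the module with a static buffer size and pointing the client at the device it creates (parameter and device names per the kvmfr module docs; 32 matches the size calculated above):

modprobe kvmfr static_size_mb=32
looking-glass-client -f /dev/kvmfr0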

There are TONS of edge cases that need to be covered in any guide produced to give the end user the best experience.

Also, this is wrong; you should be using virtio-mouse as per the documentation. It’s faster, allows for 5-button mice, and suffers none of the performance/latency drawbacks of emulated USB hardware or the limitations of PS/2.
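
In libvirt terms that’s just switching the input devices over to the virtio bus, e.g. (standard libvirt XML, not LG-specific):

<input type='mouse' bus='virtio'/>
<input type='keyboard' bus='virtio'/>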

Assuming you mean the new audio feature currently in LG Bleeding Edge? If you’re documenting another method, such as using Scream or qemu’s audiodev output with env vars, etc., know that it’s already deprecated.
