
Looking Glass - Guides, Help and Support

iommu
lookingglass

#760

Because with vsync off, your client is trying to render as fast as possible even when there is nothing new to render. This pegs your CPU and/or GPU at 100%, impacting the performance of other things, such as your VM, and starving threads for cursor updates, etc. A11 includes an FPS limiter to combat this for those who insist on running with vsync off.

Please note that VSync doesn’t exist just to reduce screen tearing; its main purpose is to prevent your system from going nuts doing extra work for no reason. If you must run without vsync, anything greater than 2x the native refresh rate of your screen is useless: it wastes power, generates excess heat, and impacts performance in other areas.

In short, VSync is a good thing, and before people turn it off they should learn what it is for, what it does, and why. Once you are rendering at the refresh rate of your monitor, anything more is just useless.
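For example, assuming the client’s -K option (used in a command later in this thread) is the FPS limiter, capping the client at the display’s refresh rate looks roughly like:

looking-glass-client -K 120    # limit rendering to 120 FPS for a 120 Hz display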


#761

I mentioned that DOOM (an FPS game) in my VM can only run at around 65 fps, but reaches around 120 fps in a physical Windows installation. I played with the client vsync option just to see if this might be the cause. Looks like it is not, then :wink:


#762

Sounds like you might be PCIe speed limited. What kind of config do you have? Is the card running at its full link width and speed? What resolution are you testing against, etc.?


#763

It is an MXM form-factor GPU in my laptop; it should run at a x16 physical interface.

I’m passing the GPU through with:
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1
-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,x-pci-sub-device-id=6065,x-pci-sub-vendor-id=4136,multifunction=on

The Linux host and Windows VM are both running on a 1080p 120 Hz display.

Should I check the PCIe speed inside the VM?
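One way to verify from the Linux host (a rough sketch, assuming the GPU is at 01:00.0 as in the vfio-pci line above) is to compare the card’s maximum and currently negotiated PCIe link:

sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'
# LnkCap shows the maximum supported speed/width, LnkSta what is negotiated right now
# (many cards drop the link speed at idle, so check while the GPU is under load)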


#764

Alpha 11 has been released:


#765

Awesome!
Thank you for all the improvements.


#766

Fantastic!

I will be testing it very soon.


#767

Just a note to anyone else experiencing the Windows host program crashing at launch with no error output: be sure you have configured the CPU in QEMU/libvirt correctly, as the host program requires SSE support. The default “QEMU Virtual CPU” does not expose this support to the guest, so the host program will fail to operate. It is recommended to set the CPU to “host-passthrough” to expose all the features of your CPU to the guest so it can take full advantage of them.
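For example (a minimal sketch; -cpu host is the plain-QEMU spelling, and libvirt’s host-passthrough CPU mode is the equivalent):

-cpu host    # expose the host CPU model, including SSE, to the guest
# libvirt domain XML equivalent: <cpu mode='host-passthrough'/>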


#768

Having a few issues getting Spice to connect to my Windows 10 VM. The looking glass client is running fine, but I get an error unless I disable spice (-s option).

Host OS: Arch Linux
CPU: 7600K (upgrading to an 8700K on Wednesday)
RAM: 16 GB (4 GB given to the VM)
Host GPU: iGPU
Guest GPU: GTX 1070 (VFIO passthrough)
Monitor: 2560x1440, 144 Hz

Guide Followed

Windows 10 VM xml

Error:
$ looking-glass-client -F
[I] main.c:702 | run | Looking Glass (a11-14-g9e02131525)
[I] main.c:703 | run | Locking Method: Atomic
[I] main.c:696 | try_renderer | Using Renderer: OpenGL
[I] main.c:790 | run | Using: OpenGL
[I] spice.c:159 | spice_connect | Remote: 127.0.0.1:5900
[E] spice.c:562 | spice_connect_channel | socket connect failure
[E] spice.c:167 | spice_connect | connect main channel failed
[E] main.c:886 | run | Failed to connect to spice server


#769

If you’re using libvirt, you could add a Spice channel through the UI. I’d also recommend trying evdev passthrough; I like it better when it’s about gaming.
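As a rough sketch of what that can look like as plain QEMU arguments (the Spice address and port match what the client log above is trying to reach; the /dev/input/by-id paths are placeholders for your own devices):

# Spice server listening where the Looking Glass client expects it
# (newer QEMU versions spell the last option disable-ticketing=on)
-spice port=5900,addr=127.0.0.1,disable-ticketing
# evdev passthrough of the physical keyboard and mouse
-object input-linux,id=kbd1,evdev=/dev/input/by-id/YOUR-KEYBOARD-event-kbd,grab_all=on,repeat=on
-object input-linux,id=mouse1,evdev=/dev/input/by-id/YOUR-MOUSE-event-mouse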


#770

What kind of Spice channel through virt-manager?
Also, is there a guide on how to do evdev?

Got it figured out, thanks.

Followed this guide


#771

So,

I’m running into some decreased gaming performance in CS:GO and R6: Siege when going from A10 to A11: around 10-15% less UPS and a noticeably choppier experience. A11 never reaches full UPS on the desktop (at 120 Hz it only goes up to 100~110 UPS; A10 was capable of a solid 120 UPS at 120 Hz and 120 FPS, a much smoother experience).

I didn’t do any rigorous benchmarking, but I did test a very large range of different settings (V-sync, Spice, preventBuffer, mipmap, fullscreen, different compositors, different -K values) and honestly there were no defining changes (preventBuffer=0 did not change anything, maybe one UPS or two).

I made sure no cores (in either the guest or the host) were at 100% utilization, and the GPU was at around 100% in both versions of LG while gaming.

Any ideas on how to take the performance to the next level?

Setup:
GTX 970 guest
6700K (6 cores on guest, 2 on host)
DDR4 at 3000MHz at average timings
HD530 and GTX 770 for host


#772

Sorry to hear that; it is evolving software, so some changes can cause things to go backwards sometimes. I will have a dig around and see if I can spot what may have changed, but AFAIK nothing should have caused a slowdown.

What resolution are you running? Also, please post the output of the client.


#773

On Fedora, you will now need 2 additional packages to build the client:

libconfig-devel and nettle-devel

Not having these packages will result in failed builds.
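For example (a quick sketch using Fedora’s package manager):

sudo dnf install libconfig-devel nettle-devel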


#774

This was documented both on the website and in the release information.


#775

Running into a new problem. Fedora 27 only provides libconfig.so.9, and not the libconfig.so.11 that the client is requesting.

Edit: NVM, it was my own fault for building libconfig from scratch…


#776

1080p

./looking-glass-client-a11 -o opengl:vsync=0 -kMFsK 120
[I]               main.c:692  | run                            | Looking Glass ()
[I]               main.c:693  | run                            | Locking Method: Atomic
[I]               main.c:686  | try_renderer                   | Using Renderer: OpenGL
[I]               main.c:775  | run                            | Using: OpenGL
[I]               main.c:901  | run                            | Waiting for host to signal it's ready...
[I]             opengl.c:552  | pre_configure                  | Vendor  : Intel Open Source Technology Center
[I]             opengl.c:553  | pre_configure                  | Renderer: Mesa DRI Intel(R) HD Graphics 530 (Skylake GT2) 
[I]             opengl.c:554  | pre_configure                  | Version : 3.0 Mesa 17.2.8
[I]               main.c:921  | run                            | Host ready, starting session
[I]               main.c:177  | updatePositionInfo             | client 1920x1080, guest 1920x1080, target 1920x1080, scaleX: 1.00, scaleY 1.00
[I]             opengl.c:602  | configure                      | Using decoder: NULL

Yes, I’m aware this is very much an alpha product and I’m very grateful the current version is working as well as it is :slight_smile:


#777

Just tried the a11 host, but I can only see DXGI support, though I used NvFBC in a10.


#778

NvFBC has been disabled until the beta release; it is too much work to maintain two APIs while the code is still in flux.


#779

I’m liking that some games with light CPU load in low-complexity scenes now actually reach 4K at 53-60 fps on my potato X79 system with a Sandy Bridge Xeon. High-complexity scenes still dip down to 38 fps at 4K, so overall bandwidth on the Xeon is still an issue. I’d have to upgrade to Threadripper 2, with quad-channel DDR4-3466 guaranteed, in order to reap the benefits of Looking Glass at 4K.