There are several messages for “not big enough”; which one exactly are you getting?
Sorry, I tried to reproduce the situation I had before and this time I had no such error.
Hey There
First off, I just want to say thank you so much, gnif, for providing this amazing work on Looking Glass!
I've been a long-time VFIO passthrough user and decided to give Looking Glass a shot. Previously my setup used a Linux host to pass my main GPU through to various virtual machines: Windows, Linux, Mac OS X, FreeBSD, Illumos and Android-x86.
Normally I'd just switch between inputs or use the Picture-in-Picture mode on my monitor, but Looking Glass makes the input switching a lot easier now.
My build is almost 6 years old now but still going strong and works perfectly fine (if anyone has a similar older platform, I can confirm Sandy Bridge-E works):
CPU: i7 3960X
Memory: 32GB DDR3 1866MHz
Motherboard: Asrock X79 Extreme 9
GPU Host: GTX 960
GPU Passthrough: Titan X
Got a couple of questions (and sorry if asking for support for certain operating systems is a bit sketchy, haha):
Are there any plans to support the Looking Glass host application on other guest operating systems? Linux, FreeBSD, Mac OS X (might be a stretch), etc.? I would certainly be more than happy to help get it working.
I think I might be doing something a little bit dumb here, but I'm trying to get the Looking Glass host program to autostart at login. I'm getting an error in the command prompt:
"Unable to configure a capture device
An error occurred, re-run in foreground mode (-f) for more information
Press enter to terminate…"
Again amazing work, looking forward to using looking glass!
DXGI capture isn't allowed on the login screen or on Secure Desktop screens, such as the confirmation prompt for running a program as administrator.
I would enable auto-login (bypassing the password prompt) and run the host application on login.
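Something along these lines should work (a rough sketch using the standard Winlogon auto-logon values plus a scheduled task to launch the host elevated at logon; the username, password, task name and exe path below are placeholders, and note the password ends up stored in plain text in the registry):

rem Standard Winlogon auto-logon values (password is stored in plain text)
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d YourUser /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d YourPassword /f
rem Then start the host elevated at logon via a scheduled task
schtasks /create /tn "Looking Glass Host" /tr "C:\Path\To\looking-glass-host.exe" /sc onlogon /rl highest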
Ah righto, I'll have to investigate something to auto-login on this machine, as I do connect remotely to my work's domain controller over a VPN. Otherwise I'll just keep switching monitor inputs / using PIP mode for login if that's too insecure.
Also, not sure if this is covered already, but stuff like UAC prompts doesn't come up? Not sure if there's a fix or if I should just disable it?
I am having a bit of a tearing/choppiness issue. It's not too bad, but I'm wondering if it's my hardware or something incorrectly configured. I was using the Unigine Valley benchmark to test and noticed a bit of stutter. Maybe something on the host isn't right?
Cheers,
They are on the Secure Desktop; please read through: https://looking-glass.hostfission.com/node/7
Working on it… turn off anti-aliasing; it doesn't play nice with DXGI.
Ah, is that what's causing it? I played around and enabled MFAA in the NVIDIA settings and that seems to help massively with the anti-aliasing stutter.
EDIT: It's practically perfect now. I'm guessing this works for Maxwell and newer GPUs.
Is that anti-aliasing issue specific to the Unigine Valley benchmark, or is it generally a problem with all games? Asking because anti-aliasing is pretty important.
I noticed stuttering issues with AA on in Overwatch and D2, so I’d venture to guess that it’s a general issue.
Unfortunately it's across the board. Post-process AA such as FXAA is fine, but old-school supersampling AA seems to cause quite a substantial slowdown.
This bottleneck seems to be in either the driver or the Windows DXGI DD API; there is little we can do about it.
I dunno, I would rather just accept that a game is a computer game and look at the jaggies than think my eyes are slowly failing due to lack of sharpness on objects, with much worse performance to boot. AMD cards have never been good at applying AA anyway.
AA in BF1 looks like utter trash.
By the way, you can add the shared memory device to a libvirt domain natively, without qemu args, like this:
<shmem name='looking-glass'>
<model type='ivshmem-plain'/>
<size unit='M'>32</size>
<address type='pci' domain='0x0000' bus='0x0a' slot='0x02' function='0x0'/>
</shmem>
Maybe you can include that in the documentation; it might be easier for some people. It translates to the following qemu args:
-object memory-backend-file,id=shmmem-shmem0,mem-path=/dev/shm/looking-glass,size=33554432,share=yes
-device ivshmem-plain,id=shmem0,memdev=shmmem-shmem0,bus=pci.10,addr=0x2
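One thing to watch out for with this approach: when libvirt creates /dev/shm/looking-glass it may end up owned by the qemu user and unreadable by the client. A sketch of one way around that using systemd-tmpfiles (the user and group below are assumptions; adjust them to whoever runs the client and whatever group your qemu process uses):

# /etc/tmpfiles.d/10-looking-glass.conf
# Create the shared memory file at boot with permissions both qemu and the client can use
f /dev/shm/looking-glass 0660 youruser kvm -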
For the problem of no display detection in the Windows guest causing the resolution to default to 1024x768, it looks like a couple of registry tweaks should work as an alternative to having to use a dummy connector.
I did some googling and poking around in the registry in Windows 10, and it looks like HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration\ should have the relevant keys for display resolution settings for the various video-out configurations Windows has detected.
For me, \NOEDID_1A03_2000_0000000B_00000000_0^3B055267EA1D3037447EEC960EDDCAE3\00 looks to be the sub-key for the simulated safe-resolution display used when Windows doesn't detect a display. It has PrimSurfSize.cx and PrimSurfSize.cy values of 1024 and 768 respectively. It also has another \00 sub-key with DwmClipBox.bottom and DwmClipBox.right values that look like they'd need to be changed from 1024 and 768 respectively as well.
Note: I haven't tested any of this at all, just thought I'd mention it (still waiting on a good deal on an AMD card to run as my host GPU before I take the plunge and ditch Windows as my host OS).
Edit: I spun up a Windows 10 VM real quick and tested those edits (started the VM headless with auto sign-in enabled for the Windows user, and connected to it with TeamViewer), and changing those registry values for the relevant key worked like a charm. Not sure if having a separate graphics driver installed in the guest will affect anything, but it doesn't seem like it should. Someone should definitely give those tweaks a go and see if it works in a proper environment.
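If anyone wants to try it from an elevated command prompt, here is a rough sketch of the edits. The NOEDID_... sub-key name is specific to my VM, so you'll need to find your own; I'm assuming the values are plain DWORDs as they appeared in my registry, that right/bottom map to width/height, and 1920x1080 is just an example target resolution:

rem Resolution of the simulated display when no monitor/EDID is detected
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration\NOEDID_1A03_2000_0000000B_00000000_0^3B055267EA1D3037447EEC960EDDCAE3\00" /v PrimSurfSize.cx /t REG_DWORD /d 1920 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration\NOEDID_1A03_2000_0000000B_00000000_0^3B055267EA1D3037447EEC960EDDCAE3\00" /v PrimSurfSize.cy /t REG_DWORD /d 1080 /f
rem Matching DWM clip box values in the nested \00 sub-key
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration\NOEDID_1A03_2000_0000000B_00000000_0^3B055267EA1D3037447EEC960EDDCAE3\00\00" /v DwmClipBox.right /t REG_DWORD /d 1920 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration\NOEDID_1A03_2000_0000000B_00000000_0^3B055267EA1D3037447EEC960EDDCAE3\00\00" /v DwmClipBox.bottom /t REG_DWORD /d 1080 /f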
Is it normal to get low UPS when the GPU is under heavy load? For example, I get 30-40 FPS in Unigine Heaven but <= 10 UPS in the Looking Glass client.
Can’t compile git head:
gcc -c -g -O3 -std=gnu99 -march=native -Wall -Werror -I./ -I../common -DDEBUG -DATOMIC_LOCKING -ffast-math -fdata-sections -ffunction-sections -D_REENTRANT -I/usr/include/SDL2 -I/usr/include/libdrm -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/harfbuzz -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/freetype2 -I/usr/include/libpng16 -I/usr/include/harfbuzz -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/spice-1 -DBUILD_VERSION='"a10-16-gbebbdc4089"' -o .build/decoders/h264.o decoders/h264.c
decoders/h264.c: In function ‘lgd_h264_initialize’:
decoders/h264.c:199:7: error: ‘VAProfileH264Baseline’ is deprecated [-Werror=deprecated-declarations]
VAProfileH264Baseline,
^~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/va/va_glx.h:28:0,
from decoders/h264.c:28:
/usr/include/va/va.h:345:5: note: declared here
VAProfileH264Baseline va_deprecated_enum = 5,
^~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make: *** [Makefile:31: .build/decoders/h264.o] Error 1
Edit: Removing -Werror avoids this error and it compiles (couldn't test it yet, though).
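A slightly less heavy-handed alternative, if you want to keep -Werror for everything else, is to make just this deprecation warning non-fatal (a sketch; this assumes the flags live in the client Makefile shown in the failing command above):

# keep -Werror but stop the VAProfileH264Baseline deprecation warning from being fatal
sed -i 's/-Werror/-Werror -Wno-error=deprecated-declarations/' Makefile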