Some changes have been made over the last 48 hours, one of which is an update to the host exe that improves capture performance dramatically. 4K @ 60 Hz should be entirely viable now. Once it has had some extra testing and I am sure it's running stably, I will release A12.
Shit, just noticed one can actually use Looking Glass without any dummy device whatsoever.
Quadro drivers actually support loading an EDID from a file: just grab an EDID file and load it through the NVIDIA Control Panel and you are good to go. Not sure if GeForce can do this.
Probably not. The Workstation section is not part of the GeForce NVIDIA Control Panel.
So I have been unable to get the a11 host application running. The IVSHMEM device is in the libvirt XML and the Windows driver installed successfully. The host application crashes silently if I run the exe normally, and shows the "program has stopped working" message if I run it with -f, see below. Running it as admin makes no difference.
Host OS = Debian Stretch, kernel 4.9.82 with the ACS patch
Host GPU = GT 710 (with nouveau)
Guest OS = Windows Server 2016
Guest GPU = GTX 750 Ti
Does anyone know what program @wendell used to check memory copy latency in the video demonstrating the gains of the SSE memory copy? I would like to run it on my system.
As I understood it, it was custom.
It is neat to try this out, though I don't think it's the same as what's in LG now. Here are my results from a few tests:
No changes, out of the box from source, using memcpy on my Ryzen 1700 clocked at 3.75 GHz with 2933 MHz CAS 14 DDR RAM, and no -march flag.
32 MB = 0.879356 ms
Compare match (should be zero): 0
32 MB = 3.686604 ms
32 MB = 3.614658 ms
With memcpy_sse it gets a little worse with no -march optimizations:
32 MB = 1.884909 ms
However, it's more consistent now when I change -march targets:
32 MB = 1.864478 ms
32 MB = 1.837544 ms
This isn't gnif's assembler memcpy, but it's interesting to see how well GCC's built-in memcpy with default settings works in 32-bit mode on my system.
Update: overclocked my memory to 3066 MHz and was seeing closer to 0.83 ms.
I switched to the latest version of memcpySSE.h from the source and it was slightly worse, at 0.88 ms or so.
Maybe I'm overthinking it, but these results suggest to me that 144 Hz at 1440p should be doable with those copy speeds.
I think gnif's announcement of the breakthrough is newer than the source code on GitHub.
Let me know what you people think.
It isn’t, but there are some blockers causing other issues at the moment and for now I have run out of free time to work on LG. Next month I should have more time to devote to this project.
@gnif I think we should have a monthly thread for LG, like we do in the Lounge. This thread has become a bit bloated and difficult to maintain.
Can you make a GitLab repo mirror, just in case the terms of service change on GitHub such that MS gets unrestricted, non-exclusive rights to your code? Just in case that relationship goes sour.
That can happen IF it comes to that.
Can we not turn this thread into whining about how Microsoft is evil?
What is it? A typo on my end? I'm trying the Unix socket, but it does not work.
I run QEMU with these parameters:
qemu-system-x86_64 -spice unix,addr=/run/user/1000/spice.sock -device ivshmem-plain,memdev=ivshmem -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M ...
looking-glass-client -p 0 -c /run/user/1000/spice.sock -F
[I] main.c:692 | run | Looking Glass ()
[I] main.c:693 | run | Locking Method: Atomic
[I] main.c:686 | try_renderer | Using Renderer: OpenGL
[I] main.c:775 | run | Using: OpenGL
[I] spice.c:151 | spice_connect | Remote: /run/user/1000/spice.sock
[E] spice.c:657 | spice_connect_channel | connect code error 7
[E] spice.c:167 | spice_connect | connect main channel failed
[E] main.c:868 | run | Failed to connect to spice server
When I use port 5902 it works, but the Unix socket does not.
I’m using version a11 - https://aur.archlinux.org/packages/looking-glass/
Does your current user have permission to access the socket? (ls -l /run/user/1000/spice.sock)
srwxr-x--- 1 admin users 0 13. čen 15.23 /run/user/1000/spice.sock
And what user are you running as? To access this socket you need to either be admin or part of the users group (although the users group doesn't have write access).
After QEMU starts, you can chown the socket to your current user (sudo chown $UID /run/user/1000/spice.sock), which will give you access to it.