NvFBC or DXGI? I never have any issues with DXGI.
I have dumps and logs which I could post as an issue on GitHub. Using DXGI (since NvFBC no longer works for me), the host often crashes when the resolution/refresh rate changes (e.g. from 720p to 1440p).
I’ve tried to redirect the output in various ways, but I guess stdout isn’t used; output is written directly to a new console window (I tried -NoNewWindow with Start-Process as well).
Please do, including steps that reliably reproduce it, preferably through means other than a game (i.e. changing the desktop resolution).
Correct, it’s not a true console.
Currently still uploading dumps (about 33 dumps, each ~30 MiB compressed); for each I have a text file of the same name with the console output copied into it, and a short description of what I did in the file name. Hosted in a mega.nz folder.
I described the problem as best I could, with an example scenario, in the GitHub issue report. But since there are different circumstances in which LG errors out with different error output, or even the same scenario in which it doesn’t error, I couldn’t isolate the cause. Just changing the desktop resolution doesn’t cause this issue, but fullscreening a game, or alt-tabbing back to a game in full-screen mode, usually triggers it.
I hope this helps in any way and you are able to dig through and find something.
Edit: Issue posted here
For now I’m just relying on a hotkey bound to a PS snippet that restarts looking-glass-host when I notice it has errored, which I can still invoke through spice while the LG client is focused.
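For reference, such a restart snippet might look something like this (a sketch only; the process name and install path are assumptions, so adjust them to your setup):

```powershell
# Hypothetical restart snippet; process name and install path are assumptions.
Stop-Process -Name "looking-glass-host" -Force -ErrorAction SilentlyContinue
Start-Process -FilePath "C:\Program Files\LookingGlass\looking-glass-host.exe"
```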
Followed the guides, forums and YouTube channel (with a couple of minor setbacks), and everything works like a charm.
I’m on Fedora 27, latest kernel (4.16).
My hardware is:
Asus Prime X370-pro
Ryzen 5 1600
Sapphire 580 Nitro+ 4GB (for passthrough)
Booting Windows off an SSD.
Tried with some games, Doom, Overwatch, Far Cry and more.
Also bought an Aten USB KM U224 switch (for the keyboard, mouse and headset).
This is by far the coolest thing I’ve done!
Big thanks to everyone on the forum, Level1Techs crew, Geoffrey and everyone involved in this project!!
How well do your games perform, and what resolution are you running? Reason I ask is I have a similar setup (1600X/B350) and may spend some dough on an X370 if yours plays well.
What gpu are you using for the host?
Running at 1920x1080, and I would say they run as near to bare metal as they could.
On the host I have an Nvidia Quadro 4000 at the moment, but I have also tried a GTX 1050 and GTX 1060.
The reason I bought the 580 was the error 43 thing; I couldn’t get the Nvidia cards to work.
I had some lag spikes due to bad drivers in the beginning, so that’s a big deal.
Is error 43 hard to get around now? I thought it was a simple config file edit to hide from the guest that it is a VM?
Error 43 requires a couple of mods to the XML of the VM.
You can open the XML by typing
virsh edit <VMname>
If it says it can’t see the VM try running it as an administrator.
There, under <features>, you have to add (or extend) the <kvm> element:
<kvm>
  <hidden state='on'/>
</kvm>
After that you have to find the <hyperv> element (also under <features>) and add:
<hyperv>
  ...
  <vendor_id state='on' value='whatever'/>
  ...
</hyperv>
Then you save the file; virsh checks it for errors, and after a full VM restart the settings are applied.
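Put together, the <features> section of the domain XML might end up looking something like this (a sketch only; the other child elements shown here, such as <relaxed> and <vapic>, depend on your existing config, and the vendor_id value can be any short string):

```xml
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```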
My experience is that it doesn’t work on all GTX cards though. My Evga GTX 1060 SC and 1050 SC didn’t work with this workaround.
Works fine on all cards I have tested on, including my EVGA GTX 1060 SC.
Maybe you can try to completely disable the Hyper-V extensions. For me (MSI 1060 3GB OC) it was just the stuff I wrote before.
Ok thanks for the response. Just to clarify though, is it possible to do this with integrated graphics (specifically the iGPU on the i7 6700k) as the host and the GTX 970 as the guest? Also is it possible to switch the GPU from guest to host when guest is not on?
Yes (assuming board support).
Not an expert on this, but from what I’ve read so far, no, since the GPU is isolated from the system at boot time.
Are you able to send a link to the switch? I’ve tried to google it but I’m not finding the exact model.
And semi off topic: Is a switch necessary right now? I thought I heard somewhere about USB passthrough without any switches but I’m not 100% sure. Thanks.
It’s a typo, sorry, try US224 and you should find it.
I’m passing a USB controller and use the switch for that, works great (I’m not using spice).
(LookingGlass should have its own sub-category under the Linux sub-category?! Would be handy; it looks busy atm.)
I moved from desktop to laptop a year ago, buying this laptop thinking all laptops had an iGPU and a discrete GPU. Turns out on this machine the iGPU had been factory disabled, as it seems to be on many new machines I see.
My intention was to have Linux as the host OS and Windows as a virtual machine: the host using the laptop screen, and the Windows guest using the discrete GPU. This is still the plan for me.
The question is: can a laptop be used with the new LookingGlass? The iGPU running the display with Linux, Windows running as a VM on the discrete GPU, and the discrete GPU’s framebuffer brought over with LookingGlass.
This would require the iGPU to remain the primary adapter while the discrete GPU is in use, rather than switching.
While this idea is still in my head, and while I’m considering replacing this laptop to accomplish it, I just wanted to see if it is even theoretically possible.
Yes, it can.
I keep getting an error during the build process:
spice/spice.c: In function ‘spice_connect’:
spice/spice.c:136:3: error: ‘strncpy’ specified bound 32 equals destination size [-Werror=stringop-truncation]
strncpy(spice.password, password, sizeof(spice.password));
compilation terminated due to -Wfatal-errors.
cc1: all warnings being treated as errors
make: *** [Makefile:39: .build/spice/spice.o] Error 1
==> ERROR: A failure occurred in build().
==> ERROR: Makepkg was unable to build looking-glass-git.