Looking Glass - Triage

Did you make sure the PCIe topology is matched to the correct cores? That’s critical for multi-die chips like EPYC. And do the dies assigned to the VM have direct access to memory (i.e. is every memory channel populated)?

I’m not actually on an EPYC, I’m on a Ryzen 1700, and that is what “Copy Host CPU” presents to the OS. Man, I wish I had those 128 cores, that would be awesome.

If you look at the actual display in the video, the HP Lapdock, it’s running butter smooth; it’s just the KVMFR client that is jittering.

I’ll see if there is something I can do to pin the cores to see if that resolves anything.

Use the second half of your cores and isolate them. If you don’t isolate cores for the VM, interrupts could end up running on cores shared with your host OS.
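
Something like this in the libvirt domain XML is what I mean, as a rough sketch for an 8-core/16-thread 1700 (the core numbering is an assumption; check lscpu -e for your actual layout before borrowing any of these numbers):

<vcpu placement='static'>8</vcpu>
<cputune>
  <!-- pin guest vCPUs to the second half: host cores 4-7 and their SMT siblings 12-15 (assumed layout) -->
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='12'/>
  <vcpupin vcpu='2' cpuset='5'/>
  <vcpupin vcpu='3' cpuset='13'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='14'/>
  <vcpupin vcpu='6' cpuset='7'/>
  <vcpupin vcpu='7' cpuset='15'/>
  <!-- keep QEMU's own emulator threads on the host half so they don't disturb the pinned cores -->
  <emulatorpin cpuset='0-3,8-11'/>
</cputune>

Actually isolating those cores from the host is a separate step that libvirt can’t do for you; something like isolcpus=4-7,12-15 on the host kernel command line (or the cgroup/cset equivalent) is the usual approach.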

Doesn’t seem to be the issue. It’s strange because only the DXGI application seems to be affected. When I switch things back to an RX 580, there are no issues whatsoever, either through the looking-glass-client or on the external display.

Is there a debug mode I can throw the looking-glass-host into to capture messages and errors?

Thanks again to @gnif for creating this awesome project, and to this forum for the amazing support it provides.

Someone directed me to this thread, hope I’m not too late on this since I see the discussion is about a week old. I’m with the OBS team. Just wanted to make sure it was abundantly clear that OBS is not using NVFBC in the new build. We have the same restrictions on NVFBC that Looking Glass does. The primary performance improvement in the new build is that OBS now sends frames from OBS’s renderer directly to NVENC without using system RAM. It has nothing to do with the Capture API at all.

Also this build is not and has not ever been exclusive to “influencers”. You can try the beta build here right now if you want: https://obsproject.com/forum/threads/nvenc-performance-improvements-beta.98950/

And as you noticed, the code is all in Git. This isn’t a “non-git” build, it’s just not a “git master” build (yet). There is no NDA hiding any part of this, I’m not sure where that notion came from. As for GDQ, not only do they build OBS themselves (Jim doesn’t do it for them), but their code is public as well.

Sorry for poking into a thread for another application, just wanted to make sure any misinformation is dispelled.

Thank you for dispelling the rumours. :+1:

Huh, so it’s an NVENC function rather than an NVFBC function… It’s unfortunate NVENC cannot encode anything other than H.264 and H.265, so we’re no closer to getting NVFBC exposed.

The notion came from the fact that Jensen advertised the HECK out of this in his CES presentation, which is usually in a section reserved for “major strategic deals.” And GDQ has plainly said they are thankful for “special builds”, when now we know it’s just building from Git. :expressionless: Oh, and influencers GOT a GPU just to test this, and the NVIDIA general NDA applies when you receive a GPU for review.

Would this “texture streaming” direct to the encoder work in conjunction with SLI mode? Since the capture API for DirectX hasn’t changed to allow lower-level access, could the game be rendered on the primary GPU while OBS runs on a separate, dedicated NVIDIA GPU, by launching OBS temporarily with the primary monitor on the dedicated GPU and then switching the primary monitor back to the “game” GPU? This is how I got around 4K performance issues: by dedicating the OBS canvas and NVENC to a separate GPU.

Sort of, but really it’s neither – it’s an OBS function. The optimization was made in OBS code related to the way OBS handles the data and passes it off to the encoder. That’s why the performance improvement affects all cards that have NVENC enabled.

To be clear, the OBS team has done a good amount of work on OBS for GDQ, but it’s stuff that always makes it into the main build. In the case of AGDQ 2019, for example, there was some new functionality added that allowed direct output to Decklink cards, which was merged to master recently, but isn’t part of a release build yet. The UI portion of that change is in a pull request (and was part of the build GDQ made) but it’s not likely that will make it into master just because the entire outputs section is in need of a UI overhaul. But the code is still all public.

Jim did receive an RTX card to use in testing the new changes, sure, but as @gnif mentioned it would violate the GPLv2 to release a binary of a GPLv2 program without also releasing the code. It sounds like this was just a case of misunderstanding of the nature of the relationship between Nvidia and OBS in this process.

I can’t say for sure, but my guess is no, or not very well. SLI tends to make OBS run worse due to the need to share data between the two cards, which is slow. Your best bet is always to run OBS, NVENC, and whatever you’re capturing all on the same card.

I don’t want to clog up this thread with non-Looking Glass talk, just wanted to make the clarification that OBS isn’t using NVFBC. If you have more OBS questions I recommend going to the OBS forums or Discord and asking there.

I’ve been asking for Decklink output from the OBS canvas for years. THANK YOU.

Has this been tested in Linux? @wendell might be our first guinea pig for that. Or @eposvox?

Seriously, Decklink output actually means A LOT to people trying to build cheaper Vmix/Tricaster alternatives.

Anyways, that’s all I’ll say on this issue. Let’s get back to Looking Glass.

@feekes I have seen the issue you are describing in some titles, most notably the Unigine Valley benchmark, where the solution is as simple as tabbing out of and back into the program. Also, if you have the option, please try running the game in windowed mode. The stutter you’re seeing seems to be common on recent (Pascal or later) NVidia hardware when using DXGI DD capture.

I wouldn’t at all be surprised if NVidia are intentionally crippling DXGI DD capture to make NvFBC a desirable thing.

@dodgepong thank you for clearing all that up. If you do ever happen to work out some kind of deal with NVidia for NvFBC, please do let us know!

@gnif Come to think of it, I remember you stating this exact thing in one of the Looking Glass videos. I’ll have to throw the 2060 back in there and give it another shot.

Another nugget for thought: it exhibited the same behavior in normal 2D applications, even with just the Windows desktop. Anything I can run to collect logs or help with debugging, I will gladly do.

Thank you so much for your awesome work.

Strange. I assume it’s still 1080p and you’re using A12? And is your guest an i440fx or q35 machine?

Please note that you’re the first person, to my knowledge, to report using an RTX card with LG, so you’re in uncharted waters.

Thanks, but it either works completely or not at all; unfortunately there is no useful logging/debug information for performance issues. The only real way to work on these is to either profile the code yourself, or help me replicate it so I can do it, which in this case would require an RTX card.

You’re most welcome.

I am on A12 using q35 and, of course, OVMF. I am running at 1080p and have increased the memory up to 64 MB, a sad attempt by me to see what changed, but I still see it only using 32. You did say that it wouldn’t go over, and guess what, the author knows his code lol.
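
For what it’s worth, 32 makes sense if I do the napkin math (assuming 32-bit frames and double buffering, which is just my guess at how KVMFR works): 1920 x 1080 x 4 bytes is about 8.3 MB per frame, two frames is about 16.6 MB, and that plus a bit of header overhead rounded up to the next power of two is 32 MB. So at 1080p the extra 32 MB simply has nothing to hold.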

I purchased the RTX card mainly for testing with Looking Glass and showing some of my students in class the really amazing things that Linux is capable of. Who knew it would behave so differently from their previous cards in this application, but what fun would it be in charted waters, right?

I’m sure a way can be arranged for you to acquire an RTX card if that is something you would be interested in.

Please be sure to run a very recent version of QEMU, 3.2 or later IIRC, and specify the PCIe link width and speed. By default, NVidia cards on a root port in a q35 machine will only report an x1 link, and the NVidia driver will configure the SOC accordingly. Please see this thread for more information: Increasing VFIO VGA Performance
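
Roughly, the override ends up in the libvirt domain XML like this. Treat the property names as a sketch (they were added as experimental x- properties and may differ between QEMU versions), and check the linked thread for the exact syntax and values:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ...rest of the domain definition... -->
  <qemu:commandline>
    <!-- advertise a PCIe 3.0 (8 GT/s) x16 link on the guest's root ports (assumed property names) -->
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-speed=8'/>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.x-width=16'/>
  </qemu:commandline>
</domain>

Note the qemu XML namespace on the domain element; without it, libvirt will silently drop the qemu:commandline block.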

Wow! I am glad to have LG as part of the lecture, I am truly honoured :slight_smile:

Most certainly, LG is one of those projects that requires multiple combinations of hardware to test and develop against. It’s near impossible to profile a performance issue without the hardware on hand.

Running the LTS Ubuntu 18.04, so I’m sure my QEMU is older, not to mention I’m actually running the machine through libvirt and not directly. Interestingly enough, the GPU performance is spot on when using an external display, and in fact the benchmarks come back identical when just using the Looking Glass client; it’s some oddity in how the image is displayed back that makes it appear to stutter. No stutter is actually present in the rendering.

I’ll have to get some more details on how I could get one sent to you. Not sure where you are located or if you would even want people sending you hardware. That could get messy quickly. If only there were someone you have dealt with in the past who had the ear of Nvidia. *cough *cough @wendell

If you are outside of the United States, I will gladly contribute the majority to a “Get this guy an RTX” fund so you can purchase one locally and not incur a crazy, needless shipping cost.

That’s not an issue, you can continue to do so. Updating QEMU under libvirt generally works without any problems in my experience.

The patch that adds link speed negotiation support will almost certainly be missing from your QEMU build, as an official release has not been made since that code landed in git.

Correct, we have seen this behaviour on Discord and found that the link speed issue is the cause. The GPU is throttling its GPU RAM -> System RAM copy, likely under the false assumption that the transfer will block the PCIe bus for too long while streaming textures.

Unfortunately, NVidia have been completely silent on this project, as they do not support nor want people using consumer NVidia cards for passthrough. I am in Australia, so if you’re posting internationally, as @wendell discovered, the cost is astronomical unless you’re sending from China.

Thank you kindly, however I already have a GoFundMe in progress for a Quadro so I can perfect the NvFBC support. Simply to stay organised, I am not keen to run multiple fundraisers at once, as tracking the financials would become a nightmare. However, if you really wish to donate towards LG RTX hardware, you can donate directly via PayPal if you like.

I will definitely give those suggestions a shot. Looks like I need to read a little more into the past to see what problems I face in my future. :slight_smile:

Hopefully we can get you the Quadro you need to further development. I don’t have a whole lot, being in education, but I hope the little I can contribute helps.

Good luck on your ventures, and thank you for being such an approachable developer and role model for those young ones looking at the open-source space. Too often they get hit with the old RTFM mentality.

Hello all,
I’m trying to use the Looking Glass software to display output from my KVM guest in a window so that I don’t need an external monitor (thus keeping the portability of a laptop), and I am running into some issues. The Looking Glass quick setup does not talk about using virt-manager and instead uses the qemu command line. I’m not sure how the qemu command line works, and there seem to be no tutorials on how to set this up using virt-manager. In my research I started seeing that Optimus cards cannot even do VGA passthrough, so is what I am trying to do even possible? And if so, how do I do it? Are there alternatives that would achieve the same results with performance similar to Looking Glass?

  1. You should use libvirt from the quick setup instead of the qemu command line, since you also want to use virt-manager (see the sketch after this list).

  2. It is sometimes possible.

  3. See here for how to do it if your laptop is muxed: https://gist.github.com/Misairu-G/616f7b2756c488148b7309addc940b28

  4. If there were other fast alternatives, then why would gnif work on Looking Glass? You could use RDP or similar, although those are much slower than Looking Glass.
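
To make the libvirt route from point 1 more concrete, the shared memory device goes into the domain XML along these lines (I don’t think virt-manager has a GUI entry for it, so virsh edit is the usual way in; the 32M assumes 1080p and is the value you’re most likely to need to change):

<devices>
  <!-- ...existing devices... -->
  <shmem name='looking-glass'>
    <model type='ivshmem-plain'/>
    <size unit='M'>32</size>
  </shmem>
</devices>

If I have it right, that creates /dev/shm/looking-glass on the host, which is the file the looking-glass-client then maps.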

I can’t get the Looking Glass host to work on Windows.
The GPU is passed through and its driver is installed.
When I run the command prompt as administrator and issue looking-glass-host.exe -f, this error happens:

[I] CaptureFactory.h:83 | CaptureFactory::DetectDevice | Trying DXGI
[I] DXGI.cpp:232 | Capture::DXGI::Initialize | Device Descripion: Microsoft Basic Render Driver
[I] DXGI.cpp:233 | Capture::DXGI::Initialize | Device Vendor ID : 0x1414
[I] DXGI.cpp:234 | Capture::DXGI::Initialize | Device Device ID : 0x8c
[I] DXGI.cpp:235 | Capture::DXGI::Initialize | Device Video Mem : 0 MB
[I] DXGI.cpp:236 | Capture::DXGI::Initialize | Device Sys Mem : 0 MB
[I] DXGI.cpp:237 | Capture::DXGI::Initialize | Shared Sys Mem : 4094 MB
[I] DXGI.cpp:241 | Capture::DXGI::Initialize | Capture Size : 1024 x 768
[E] DXGI.cpp:293 | Capture::DXGI::Initialize | Failed to create D3D11 device: 0x887a0004 (The specified device interface or feature level is not supported on this system.)
[E] CaptureFactory.h:92 | CaptureFactory::DetectDevice | Failed to initialize a capture device
Unable to configure a capture device

Press enter to terminate…

Not sure if it matters, but the VM’s graphics card is an AMD Radeon RX 590.