That was it. I'm used to AMD hardware; why must NVIDIA make things more difficult?
Thank you so much.
Because they don’t cater to our use case.
They do that on the consumer cards so that enterprises will be forced to buy quadros.
Totally understand from a business perspective. They are the ruling class of GPU manufacturers. It kinda made me feel a little dirty installing an NVIDIA GPU in the system. lol
As an update on the RTX 2060 and Looking Glass: it works great in everything I have tried except the Wildlands benchmark. It gets a great frame rate, but appears choppy through the Looking Glass client, though not on the external monitor. Interesting to say the least.
Wanted to share my experience with the RTX 2060. It's stuttering on the client, but not on the monitor. Wanted to share this in case there is something I can test to further the development of the project, or in case it is something someone else has seen already. Attached are the final results of the benchmark run and a link to the video showing the stuttering on the client and the fluidity on the external display.
Might be a few minutes until the upload is finished.
Did you make sure the PCIe topology is set to the correct cores? That's critical for multi-die chips like EPYC. And do the dies assigned to the VM have direct memory access? (i.e. is every memory channel populated?)
I'm not actually on an EPYC, I'm on a Ryzen 1700, and that is what "Copy Host CPU" presents to the OS. Man, I wish I had those 128 cores, that would be awesome.
If you look at the actual display in the video, the HP Lapdock, it's running butter smooth; just the KVMFR client is jittering.
I’ll see if there is something I can do to pin the cores to see if that resolves anything.
Use the second half of your cores and isolate them. If you don't isolate cores for the VM, interrupts could be running on cores shared with your host OS.
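For anyone following along, pinning under libvirt looks roughly like this. This is only a sketch assuming a Ryzen 1700 (8 cores / 16 threads) with the second half of the physical cores given to the guest; core/thread numbering varies by board and BIOS, so verify yours with `lscpu -e` before copying anything:

```xml
<!-- Sketch: guest gets physical cores 4-7 (host threads 4-7 and their
     SMT siblings 12-15 on a typical Ryzen 1700 layout).
     Verify the actual sibling pairs with `lscpu -e`. -->
<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='12'/>
  <vcpupin vcpu='2' cpuset='5'/>
  <vcpupin vcpu='3' cpuset='13'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='14'/>
  <vcpupin vcpu='6' cpuset='7'/>
  <vcpupin vcpu='7' cpuset='15'/>
</cputune>
```

Pinning alone doesn't stop the host from scheduling work on those cores; for full isolation you'd also keep the host off them, e.g. with `isolcpus=4-7,12-15` on the kernel command line (matching the same hypothetical core numbers).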
Doesn't seem to be the issue. It's strange because it only seems to be the DXGI application that is affected. When I switch back to an RX 580, there are no issues whatsoever on either screen, through looking-glass-client or on the external display.
Is there a debug mode I can throw the looking-glass-host into to capture messages and errors?
Thanks again to @gnif for creating this awesome project and this forum for the amazing support they are doing.
Someone directed me to this thread, hope I’m not too late on this since I see the discussion is about a week old. I’m with the OBS team. Just wanted to make sure it was abundantly clear that OBS is not using NVFBC in the new build. We have the same restrictions on NVFBC that Looking Glass does. The primary performance improvement in the new build is that OBS now sends frames from OBS’s renderer directly to NVENC without using system RAM. It has nothing to do with the Capture API at all.
Also this build is not and has not ever been exclusive to “influencers”. You can try the beta build here right now if you want: https://obsproject.com/forum/threads/nvenc-performance-improvements-beta.98950/
And as you noticed, the code is all in Git. This isn’t a “non-git” build, it’s just not a “git master” build (yet). There is no NDA hiding any part of this, I’m not sure where that notion came from. As for GDQ, not only do they build OBS themselves (Jim doesn’t do it for them), but their code is public as well.
Sorry for poking into a thread for another application, just wanted to make sure any misinformation is dispelled.
Thank you for dispelling the rumours.
Huh, so it's an NVENC function rather than an NVFBC function… It's unfortunate NVENC cannot encode anything other than H.264 and H.265, so we're no closer to getting NVFBC exposed.
The notion came from the fact that Jensen advertised the HECK out of this in his CES presentation, in a section usually reserved for "major strategic deals." And GDQ has plainly said they are thankful for "special builds", when now we know it's just building from Git. Oh, and influencers GOT a GPU just to test this, and NVIDIA's general NDA applies when you receive a GPU for review.
Would this “texture streaming” direct to the encoder work in conjunction with SLI mode? Since the capture API hasn’t changed for DirectX for lower level access, could the game be rendered on a primary GPU, then OBS runs on a separate dedicated NVIDIA GPU by launching it temporarily with the primary monitor on the dedicated GPU, then switching the primary monitor back to the “game” GPU? This is how I got around 4K performance issues by dedicating the OBS canvas and NVENC to a separate GPU.
Sort of, but really it’s neither – it’s an OBS function. The optimization was made in OBS code related to the way OBS handles the data and passes it off to the encoder. That’s why the performance improvement affects all cards that have NVENC enabled.
To be clear, the OBS team has done a good amount of work on OBS for GDQ, but it’s stuff that always makes it into the main build. In the case of AGDQ 2019, for example, there was some new functionality added that allowed direct output to Decklink cards, which was merged to master recently, but isn’t part of a release build yet. The UI portion of that change is in a pull request (and was part of the build GDQ made) but it’s not likely that will make it into master just because the entire outputs section is in need of a UI overhaul. But the code is still all public.
Jim did receive an RTX card to use in testing the new changes, sure, but as @gnif mentioned it would violate the GPLv2 to release a binary of a GPLv2 program without also releasing the code. It sounds like this was just a case of misunderstanding of the nature of the relationship between Nvidia and OBS in this process.
I can’t say for sure, but my guess is no, or not very well. SLI tends to make OBS run worse due to the need to share data between the two cards, which is slow. Your best bet is always to run OBS, NVENC, and whatever you’re capturing all on the same card.
I don’t want to clog up this thread with non-Looking Glass talk, just wanted to make the clarification that OBS isn’t using NVFBC. If you have more OBS questions I recommend going to the OBS forums or Discord and asking there.
I've been asking for years for Decklink output from the OBS canvas. THANK YOU.
Seriously, Decklink output actually means A LOT to people trying to build cheaper Vmix/Tricaster alternatives.
Anyways, that’s all I’ll say on this issue. Let’s get back to Looking Glass.
@feekes the issue you are seeing I have seen in some titles, most notably the Unigine Valley benchmark, where the solution is as simple as tabbing out of and back into the program. Also, if you have the option, please try running the game in windowed mode. The stutter you're seeing seems to be common on recent (Pascal or later) NVidia hardware when using DXGI DD capture.
I wouldn’t at all be surprised if NVidia are intentionally crippling DXGI DD capture to make NvFBC a desirable thing.
@dodgepong thank you for clearing all that up; if you ever do work out some kind of deal with NVidia for NvFBC, please do let us know!
@gnif Come to think of it, I remember you stating this exact thing in one of the Looking Glass videos. I'll have to throw the 2060 back in there and give it another shot.
Another nugget for thought: it exhibited the same behavior in normal 2D applications, even just the Windows desktop. Anything I can run to collect logs or help with debugging, I will gladly do.
Thank you so much for your awesome work.
Strange, I assume it's still 1080p and you're using A12? And is your guest an i440fx or q35 machine?
Please note that you're the first person, to my knowledge, to report using an RTX card with LG, so you're in uncharted waters.
Thanks, but it either works completely or doesn't at all; there is unfortunately no useful logging/debug information for performance issues. The only real way to work on these is to either profile the code yourself, or help me replicate the issue so I can do it, which in this case would require an RTX card.
You’re most welcome.
I am on A12 using q35 and, of course, OVMF. I am running at 1080p and have increased the memory to 64 MB, a sad attempt by me to see what changed, but I still see it only using 32. You did say that it wouldn't go over, and guess what, the author knows his code lol.
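That 32 MB figure lines up with the usual sizing rule of thumb: the shared memory needs roughly width × height × 4 bytes per frame, double-buffered, rounded up to the next power of two, so 1920 × 1080 × 4 × 2 ≈ 16.6 MiB rounds up to 32 MiB and anything beyond that goes unused. A sketch of the corresponding libvirt ivshmem device (the device name `looking-glass` is just the conventional example; check the LG documentation for your version):

```xml
<!-- ivshmem device for Looking Glass.
     Rough sizing rule: w * h * 4 bytes * 2 frames, rounded up to the
     next power of two. 1920x1080x4x2 ≈ 16.6 MiB, so 32M suffices at
     1080p; a larger value is simply not used. -->
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
</shmem>
```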
I purchased the RTX card mainly for testing with Looking Glass and showing some of my students in class the really amazing things that Linux is capable of. Who knew it would behave so differently from their previous cards in this application, but what fun would it be in charted waters, right?
I'm sure a way can be arranged for you to acquire an RTX card if that is something you would be interested in.
Please be sure to run a very recent version of Qemu, 3.2 or later IIRC, and specify the PCIe link width and speed. By default, NVidia cards on a root port in a q35 machine will only report a x1 link, and the NVidia driver will configure the SOC accordingly. Please see this thread for more information: Increasing VFIO VGA Performance
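Under libvirt, that override can be sketched roughly as follows. This is an illustration only, assuming a Qemu build that already contains the link negotiation patch; `speed=8` (8 GT/s, Gen3) and `width=16` are example values for a x16 Gen3 slot, and the `xmlns:qemu` namespace must be declared on the `<domain>` element for `<qemu:commandline>` to be accepted:

```xml
<!-- Declared on the domain element:
     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> -->
<qemu:commandline>
  <!-- Example values for a x16 Gen3 slot; adjust to your hardware. -->
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.speed=8'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.width=16'/>
</qemu:commandline>
```

See the linked "Increasing VFIO VGA Performance" thread for the authoritative version of this workaround.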
Wow! I am glad to have LG as part of the lecture; I am truly honoured.
Most certainly, LG is one of those projects that requires multiple combinations of hardware to test and develop against. It’s near impossible to profile a performance issue without the hardware on hand.
I'm running the LTS Ubuntu 18.04, so I'm sure my qemu is older, not to mention I'm actually running the machine through libvirt and not directly. Interestingly enough, the GPU performance is spot on when using an external display, and in fact the benchmarks come back identical when just using the Looking Glass client; it's some oddity in displaying the image back that makes it appear to stutter. No stutter is actually present in the rendering.
I'll have to get some more details on how I could get one sent to you. Not sure where you are located or if you would even want people sending you hardware. That could get messy quick. If only there were someone you have dealt with in the past who had the ear of Nvidia. *cough *cough @wendell
If you are outside of the United States, I will gladly contribute the majority to a "Get this guy an RTX" fund so you can purchase one locally and not incur a crazy, needless shipping cost.
That's not an issue; you can continue to do so. Updating qemu under libvirt generally works without issue in my experience.
The patch that adds link speed negotiation support will most certainly be missing from your Qemu build as an official release has not been made since that code made it into git.
Correct, we have seen this behaviour in Discord and found that the link speed issue is the cause. The GPU is throttling its GPU RAM → system RAM copy, likely under the false assumption that the transfer will block the PCIe bus for too long while streaming textures.
Unfortunately NVidia have been completely silent on this project as they do not support nor want people using consumer nvidia cards for passthrough. I am in Australia so if you’re posting international as @wendell discovered the cost is astronomical unless you’re sending from China.
Thank you kindly, however I already have a GoFundMe in progress for a Quadro so I can perfect the NvFBC support. Simply to stay organised, I am not keen to run multiple fundraisers at once, as tracking the financials would become a nightmare. However, if you really wish to donate towards LG RTX hardware, you can donate directly via PayPal if you like.