
Compression/encoding for lg

I’m running LG at 2K/144 Hz and getting some noticeable latency. I know LG is not designed for this resolution and refresh rate, and I see the bottleneck as bandwidth (simple math).

My question is: why doesn’t LG use any compression and/or encoding? I’m asking because I see an H.264 implementation on the client side (Linux), but it apparently isn’t being used, nor is it implemented on the server (Windows) side.

Because it’s not limited by bandwidth, but by the DXGI capture API.

2K @ 144 FPS = 2048 × 1080 × 4 × 144 = 1,274,019,840 B/s ÷ 1024 ÷ 1024 = 1215 MB/s
Well below the maximum throughput of PCIe and DDR.
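The arithmetic above can be verified with a quick script, using the same figures the post assumes (a 2048×1080 frame at 4 bytes per pixel, 144 frames per second):

```python
# Uncompressed capture bandwidth for 2K (2048x1080) at 144 FPS.
width, height = 2048, 1080
bytes_per_pixel = 4   # 32-bit pixels, even when only 24 bits carry color
fps = 144

bytes_per_second = width * height * bytes_per_pixel * fps
mib_per_second = bytes_per_second / 1024 / 1024

print(bytes_per_second)  # 1274019840
print(mib_per_second)    # 1215.0
```

At roughly 1.2 GB/s this is an order of magnitude below what even a few lanes of PCIe or dual-channel DDR4 can sustain, which is the point being made.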

It was never completed, and it was abandoned after proving that we are not limited by bandwidth.

I thought IVSHMEM involved some hard drive operations.

None at all. The file is a shared-memory file: it’s not on the disk, but in RAM.
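This behavior can be demonstrated with a small sketch. On the Linux side, a file under /dev/shm (a tmpfs mount, assumed to exist as on typical Linux systems) behaves like the IVSHMEM-backed file LG uses: reads and writes go to memory pages, never to a hard drive. The path and size here are illustrative only.

```python
# Sketch: a shared-memory file lives in RAM, not on disk.
import mmap
import os
import tempfile

# Create a file on tmpfs (RAM-backed), analogous to the IVSHMEM file.
fd, path = tempfile.mkstemp(dir="/dev/shm")
try:
    os.ftruncate(fd, 4096)            # size the shared region
    with mmap.mmap(fd, 4096) as mem:
        mem[:5] = b"hello"            # a plain memory write, no disk I/O
        assert bytes(mem[:5]) == b"hello"
finally:
    os.close(fd)
    os.unlink(path)
```

QEMU maps the same region into the guest, so the frame data is handed over entirely through RAM.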

Thanks! So what’s the bottleneck of LG right now?
I do feel latency running some games at [email protected], while with others it’s less obvious. Is it related to color space as well?

Make sure you have a frame-rate limit in your guest (60 FPS) or enable vsync, and ensure you don’t overtax your guest GPU; otherwise Windows leaves no headroom for DXGI capture, impacting performance.

Note: When using LG, guest vsync does not add latency as it does for a physical monitor.

All captures always use 32 bits of color space, even if only 24 bits are actually used. This is done for performance reasons (aligned memory copies/accesses).
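The point about alignment can be sketched as follows. A 4-byte pixel keeps every pixel on a 32-bit word boundary, so copies can move whole words; packing 3-byte RGB would misalign most pixels. The BGRA byte order below is an assumption (it is a common capture layout), not something stated in the thread:

```python
# Sketch: 24 bits of color padded to a 32-bit pixel for alignment.
import struct

def pack_bgrx(r, g, b):
    # Three color bytes plus one unused padding byte yields
    # one aligned 32-bit word per pixel.
    return struct.pack("<BBBB", b, g, r, 0)

pixel = pack_bgrx(255, 128, 0)
assert len(pixel) == 4            # 4 bytes per pixel, not 3
row = pixel * 2048                # one 2048-pixel row
assert len(row) % 4 == 0          # every pixel starts word-aligned
```

The cost is one wasted byte per pixel, which is why the bandwidth math above uses 4 bytes per pixel rather than 3.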

Does this apply to the NVIDIA API as well?

I probably overtaxed my CPU; does it matter?

Not so much, but it still has its limitations. Also, NvFBC will be dropped at some point, as NVIDIA has deprecated the API.

Very doubtful. The affected transfer is the copy from GPU RAM to system RAM, which is a GPU DMA process and does not involve the CPU.

Thank you so much!

I’ll try with my new 3900x build after it arrives.
