Just an F.Y.I. for those using Looking Glass and getting a lower FPS than what your in-game FPS counter displays. There is a way to remedy that, but at a performance cost.
RTSS added a feature a while back designed to emulate a V-Sync-on experience while leaving V-Sync off, called Scanline Sync. We don't particularly care about that, but alongside it they implemented a way to flush GPU buffers. Doing so prevents the guest GPU from running at 100% utilization. This leaves room for the capture API to do its thing and send frames to our host GPU.
All that needs to be done is to create a profile for your preferred game (DOOM, in this example) and set Scanline Sync to any number.
Once your profile is created, open the RTSS config file that was just created for your game and add SyncFlush=2 under the section containing the framerate/Scanline Sync settings.
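For reference, here's a minimal sketch of what the edited profile might look like. The exact section name and filename are assumptions based on typical RTSS per-game profiles (usually found in the Profiles folder under the RTSS install directory), so match whatever your generated file actually contains:

```ini
; Hypothetical per-game RTSS profile, e.g. DOOMx64.exe.cfg (filename is an assumption)
; The [Framerate] section name is also an assumption -- keep whatever sections
; RTSS generated for you and just add the SyncFlush line.
[Framerate]
Limit=0
SyncFlush=2
```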
Once done, start up your game and check out the difference. Changes can be made on the fly by flipping Scanline Sync to 0 in RTSS to compare 'on' and 'off' performance. Check out the UPS differences below.
This has worked for every game I've tried, at a cost of around 10 FPS, and 20-ish if you're already pushing the triple-digit range. You can even go full screen now to reduce input latency even more. Unfortunately, for those wanting to hit triple-digit frame rates, you're going to need some serious hardware to push that while flushing the GPU.
Doing all this has made me realize I really need a new graphics card. Anyway, you should be able to 'see' all the frames now. I've been running this for a few months and never bothered to make a post about it; figured I'd get around to it sometime.
Just thought I’d get the message out. If anyone has a 1080ti/Vega or above, I’d really like to see how it works for you!
This is awesome; it should help those using DXGI capture. I will do some testing today on my 1080Ti and see how it goes.
Actually, this is exactly what we want: we don't care about real V-Sync, just a frame rate limiter that keeps the GPU below 100% load, leaving some headroom for capture. This way we get frames as fast as we can draw them, without ever waiting on the monitor's V-Sync.
Edit: Initial testing seems to show that setting SyncFlush=2 increases the capture latency by one frame; further testing will be required to confirm this.
Edit 2: It seems that SyncFlush is not actually useful to LG, as it's all about making the scan-line sync timing accurate enough to prevent tearing on a real monitor. Since LG captures full frames and is not affected by tearing at this point in the process, SyncFlush is not going to help here. The only time LG might see tearing is in the client-side render, which is decoupled from the capture itself.
Hm, interesting. This does seem to help in my use case, which is a higher resolution, but I need to test with more games where I can consistently push higher frame rates. WoW doesn't seem to stutter quite as much, and UPS is definitely reported as higher, but due to the way the game engine works I don't always have high frame rates anyway.
EDIT: I did a test with Rocket League and couldn't really tell much of a difference looking at the raw numbers; for me it's generally around 75-95 UPS, and the game is usually consistent at 100 FPS but does drop down to the mid-90s or so. Overall smooth, mostly because the GPU isn't being fully utilized in that title.
Then I tried Talos Principle, since it has a nice built-in benchmark (up to 60 seconds) and is pretty enough to heavily load the GPU. I tested with DX11 first. When the benchmark first starts, UPS is especially low, 25 or so, and very stuttery; throughout the whole run it doesn't really go above 65, maybe 70, depending on the scene, and the benchmark averages 97 FPS. Applying this and running the benchmark again, the opening scenes are up around 60 UPS and much smoother, and it looks a lot better throughout the entire benchmark, with UPS going up to 85-90. What's interesting is that this lowers the benchmark's average FPS to 77, but overall it was more consistent.
I'm going to test the game with the other graphics APIs after I get home, just to see how they affect performance. I think I read about a big improvement with DX12 recently, and I see it's flagged as beta.
EDIT2: DX12 wasn't working for me and Vulkan rendered at a weird resolution, so no further testing. But so far this is actually an overall improvement in my use case; I'll probably apply it globally after some more testing.