Is Threat Interactive's (YT) game optimisation channel a long con?

Yes, however, in a related industry (animation film), these are the numbers:

How do they get insane graphical improvements and we over here get… 40 FPS on a good day?

It’s offline rendering. They simply add a couple hundred more machines to the render farm :grin:

1 Like

I am aware it is not anywhere close to real-time.
I am just REALLY mad that 23 years of hardware improvement have scaled so vastly differently between movies and games.

Movie budgets have blown out to hundreds of millions.

Your PC budget has not.

1 Like

I thought I was seeing these videos because I kept whining about the ghosting in Stalker 2 and how, when I turn the upscaling technology off, I have no anti-aliasing.

Darn, by the time I got to the end, all the good arguments had already been made :frowning:

So just my two cents:

  • DLSS was initially pitched to Jensen as DLAA (anti-aliasing tech). There absolutely is an argument to be made about deep learning being used to enhance image quality.
  • Leather Jacket instead turned the idea of DLAA into DLSS which itself is a misnomer (it’s a sub-sampling, not super-sampling technology)
  • DLSS sub-sampling still makes sense from an image quality point of view. Since there is a ground-truth “real” frame from the engine, all the pixels can originate from the engine. It’s definitely better than just rendering at a lower resolution and upscaling to screen res using primitive interpolation.
  • Therefore, as in Is Threat Interactive's (YT) game optimisation channel a long con? - #65 by ThisMightBeAFish and at a few points before, @ThisMightBeAFish was right that DLSS may provide better image quality, although I’m not sure what the reference and target resolutions are in each case.
  • Let’s be real, all the pixels are fake :wink:
  • The main problem starts with frame gen. Frames that did not originate from the engine represent a state that may be inconsistent with the engine.
  • There are games where that matters, and even Reflex can be somewhat detrimental (like showing an enemy in your crosshairs even though they were not in the original frame and will not be in the next “real” frame).
  • Then again, there are games where it makes absolutely no gameplay difference
  • Regardless, the analogy to video compression is a faulty one because, as others pointed out, the predicted frames are built from highly compressed information derived from the real frames they encode, not only from the neighboring frames. That is impossible without a source frame.
  • Super-resolution/“frame gen” tech in monitors? Well, as some pointed out already, it’s been in TVs for years now. Can be a real hit & miss. And terrible latency. Oh and most monitors try to minimize the latency, not introduce it.
  • If they “only doubled” at 4x gen, it means the source frame rate fell by half! So instead of 30 fps lows he got down to 15 fps lows in terms of source/engine frames.
  • Console vs PC vs VR: console doesn’t let you go wild with the mouse, rapidly turning around, shaking head and whatnot. Much easier to predict. PC does and AFAIK it’s where a lot of frame gen artifacting and weirdnesses occur. VR not only allows rapid movements (well, maybe your head doesn’t move as rapidly as your mouse, but still …) but also reacts quite badly to input lag and artifacting.
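The arithmetic behind that “only doubled at 4x gen” bullet can be sketched in a few lines (a back-of-envelope illustration, not anything from Nvidia; `source_fps` is a made-up helper name):

```python
def source_fps(output_fps: float, gen_factor: int) -> float:
    """Engine-rendered frames per second hiding behind an N-x generated output."""
    return output_fps / gen_factor

# If 4x generation "only doubled" the displayed frame rate, the engine
# rate must have fallen by half: 30 fps lows became 15 fps lows.
base_lows = 30
observed_lows = base_lows * 2      # output with 4x gen, "only doubled"
print(source_fps(observed_lows, 4))  # -> 15.0 engine fps
```

Every displayed frame beyond the engine rate is generated, so the generation pass eating into the frame budget shows up directly as a lower source rate.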
5 Likes

That is a misunderstanding of how Reflex 2 works.
The enemy was beside your crosshair in the real frame.
You move your mouse.
The picture gets updated by a “fake frame” to adjust for your input.
You press fire.
You were able to shoot “faster”
That is it.

It is almost useless for low latency local games and makes more sense for high latency stuff (+100ms) like that stupid Nvidia streaming service.

From a technological nitpicky point of view, yes you can’t compare it, because for video compression it is based on a real frame.

From a real life point of view, both are prone to artifacts, but at such a low level, it does not matter. And yes, even though video compression has a slight advantage because it is based on a real frame, I bet it still does not matter. It is not that hard to calculate an inbetween frame. It is hard or error prone to predict a frame.
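To illustrate why interpolating an in-between frame is less error-prone than predicting one, here is a deliberately toy one-dimensional sketch (assumption: a single number stands in for the position of a moving object; `interpolate`/`extrapolate` are hypothetical helper names):

```python
def interpolate(prev_px: float, next_px: float, t: float = 0.5) -> float:
    # Video-codec-style in-between: bounded by two real frames on either side.
    return (1 - t) * prev_px + t * next_px

def extrapolate(prev_px: float, cur_px: float) -> float:
    # Frame-gen-style prediction: blindly continue the last observed motion.
    return cur_px + (cur_px - prev_px)

# An object moving 0 -> 10 -> 20 and then stopping dead at 20:
frames = [0.0, 10.0, 20.0, 20.0]
interp_err = abs(interpolate(frames[1], frames[3]) - frames[2])  # bounded
extrap_err = abs(extrapolate(frames[1], frames[2]) - frames[3])  # overshoots
print(interp_err, extrap_err)  # -> 5.0 10.0
```

When motion changes abruptly, the interpolated guess is at least clamped between two real frames, while the extrapolated guess overshoots into a state the engine never produced.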

Let’s try it differently, do you care that Netflix shows you H.265 instead of RAW?
RAW does not have colour banding or other compression artifacts.

Let’s assume you are like me and do care because you also despise the shit quality streaming services provide.

Do you care that your BD uses H.265 instead of RAW?
No you don’t. Why?

And the answer to that will explain why we will play Civ 7 in 2030 with something like smooth motion :grin:

No, you misunderstand the problem.
The guy was never actually in your crosshairs.
By the time the engine recalculates input, he’s already moved.
On the next rendered frame, he’s already moved.
Reflex tells you that you’re aiming at him, meanwhile the engine tries to tell you you’re lagging behind.

2 Likes

The human head does not like rapid movement, which is why the eyes can move quite quickly :wink:

I don’t know where you got this misconception.

To make it simpler, let’s pretend for a moment that we don’t play against other players but a single player bot. That way we don’t mix server latency into the equation.

Your AI enemy stands still and does not move. You come into the room.
You move the crosshair to his head. But since you are streaming, there is 140 ms of latency. By applying your mouse inputs to a fake frame instead of waiting for a new frame (with your inputs), you “reduce” latency. Now we can debate whether that is “reducing” latency, but one thing is for sure: it improves your aim (in these high-latency scenarios).

But you don’t have to believe me, Nvidia did a study.
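The “apply your mouse inputs to a fake frame” step can be sketched as a simple image warp. This is a deliberately naive sketch (a pure horizontal shift with no depth, no rotation, and no inpainting; `warp_frame` is a made-up name, not the Reflex 2 API):

```python
import numpy as np

def warp_frame(frame: np.ndarray, dx: int) -> np.ndarray:
    """Shift the rendered image horizontally by dx pixels to match the
    newest mouse input. The edge revealed by the shift has no rendered
    data behind it, so it is left as zeros (the part needing inpainting)."""
    warped = np.zeros_like(frame)
    if dx >= 0:
        warped[:, dx:] = frame[:, : frame.shape[1] - dx]
    else:
        warped[:, :dx] = frame[:, -dx:]
    return warped

# A tiny 4x4 "frame", shifted one pixel right by the warp:
frame = np.arange(16).reshape(4, 4)
print(warp_frame(frame, 1))
```

Instead of waiting ~140 ms for a frame that includes your input, the already-rendered frame is shifted immediately; only the revealed sliver is unknown.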

The argument for DLSS upscaling improving image quality is easier to envision by forgetting about DLSS and AI and considering why developers have often used upscaling traditionally. Performance is always constrained when rendering a scene in real time, so there’s always a balance between rendering complexity and rendering resolution. The lack of either leads to reduced image quality, but the best-looking scenes are often rendered below the resolution of the screen. Rendering at a lower resolution frees up performance that can then be put towards either more complex rendering or a higher frame rate. Think back to console ports where, in order to hit resolution and refresh targets, upscaling was often used because the alternatives were either a dismal frame rate or completely removing rendering features that were needed for a faithful reproduction of the PC original.

The confusion comes in because DLSS upscaling is marketed as improving both performance and image quality. The performance improvements from DLSS come from upscaling, while the real quality improvements come from the ability to trade the performance gained by rendering at a lower resolution for increased rendering complexity (turning up in-game settings), not just from enabling DLSS in isolation. The image quality improvements from DLSS itself are realized by the DLAA part of DLSS, which without upscaling is essentially just exemplary temporal antialiasing.
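The size of the freed-up budget is easy to put a rough number on (assumption: shading cost scales roughly with pixel count; the resolutions below are illustrative, not official DLSS mode figures):

```python
def pixel_fraction(internal: tuple[int, int], output: tuple[int, int]) -> float:
    """Share of output-resolution pixels actually shaded each frame."""
    return (internal[0] * internal[1]) / (output[0] * output[1])

# Rendering internally at 1440p and upscaling to 4K shades only ~44% of
# the native pixel count, leaving the rest of the frame budget for
# heavier settings or a higher frame rate:
print(round(pixel_fraction((2560, 1440), (3840, 2160)), 2))  # -> 0.44
```

That missing ~56% of shading work is where the "free" performance comes from; what you spend it on determines whether the net result looks better than native.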

DLSS upscaling is also exemplary, but it’s important not to go overboard and imagine a future where upscaling is the source of detail. Upscaling is still relegated to the task of reducing artifacts without sacrificing detail, rather than creating detail. The more upscaling pushes towards more detail, the more artifacts it will generate, the traditional example being oversharpening of the upscaled image.

You’re considering only the best-case scenario and ignoring worst-case real-world scenarios. Imagine you’re clearing a building and heading towards a hallway that leads off to the left. You position your crosshair just right of the corner leading into the hallway, because that’s where an enemy would be revealed, and you strafe right and turn left to expose the hallway. Reflex 2 is tasked with inpainting not only the right edge of the screen but the right edges of all foreground occluding objects, which includes the part of the hallway being exposed right where your attention is fixed in anticipation of an enemy appearing. When an enemy does appear, not only will Reflex 2 be oblivious to this fact until the next frame is rendered, but it may incorrectly predict an enemy or other object appearing in the inpainted region. It’s also important not to ignore the time taken to shift the image and perform inpainting, which introduces a delay where, without Reflex 2, the emerging enemy would have been presented without delay.

I imagine Nvidia’s engineers are smart enough to purge enemies and other objects emerging from behind occlusions from the training set to reduce the model’s tendency to inpaint hallucinated objects, but it is likely that this will be an artifact, especially with complex backgrounds. An occluded enemy can never be revealed confidently by inpainting, which is why Nvidia’s marketing and testing highlight static scenes where players are standing still tracking exposed targets. I’m not convinced that Reflex 2 will be a net gain, but it will be interesting to see whether it proves to offer any competitive advantage.
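To get a feel for how big that inpainted region can be, here is a rough estimate (every number below is an illustrative assumption, not an Nvidia figure, and the pixels-per-degree mapping is a linear approximation):

```python
def disoccluded_pixels(turn_deg_per_s: float, latency_ms: float,
                       hfov_deg: float = 90.0, width_px: int = 1920) -> float:
    """Approximate width, in screen pixels, of the strip a warp must
    inpaint: the rotation accumulated over the latency window, mapped
    to pixels via a linear pixels-per-degree approximation."""
    degrees_shifted = turn_deg_per_s * latency_ms / 1000.0
    return degrees_shifted * width_px / hfov_deg

# A brisk 180 deg/s turn with ~16 ms between engine frames exposes a
# strip roughly 61 pixels wide with nothing rendered behind it:
print(round(disoccluded_pixels(180, 16)))  # -> 61
```

The faster you turn and the lower the engine frame rate, the wider the strip that has to be invented, which is exactly the situation where you most care what is in it.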

3 Likes

Nothing in what you described has ANY EFFECT ON THE GAME ENGINE, THE PART OF THE PROCESS THAT ACTUALLY DOES THE CALCULATION ON WHERE THE CROSSHAIR WAS WHEN THE MOUSE BUTTON WAS PRESSED!

Even if you played Duck Hunt on the NES, your reasoning for “better aim” would remain incorrect, since the NES was not yet checking for “target hit?” when this imaginary system flashed the white box on screen in anticipation.

2 Likes