Is Threat Interactive's (YT) game optimisation channel a long con?

I wasn’t really intending to single you out; I meant “people” in general, because it’s a definite trend in ‘the industry’. Reading your post was just what triggered the thought (and it’s the same thing I commented on LTT’s video that gushed about it).

It always struck me as strange that the flat-screen side of gaming was rushing headlong into frame gen as much as possible, while the VR side, which actually ‘needs’ a somewhat similar concept (still not really frame generation, since it isn’t trying to advance/interpolate frames, just re-use the same one with a new orientation based on the headset movement), is trying as hard as possible NOT to need it.
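(For anyone curious what “same frame, new orientation” looks like in practice, here’s a rough numpy sketch of rotation-only reprojection, timewarp-style. The pinhole camera model, per-eye resolution/FOV and the 1.5° yaw delta are all illustrative assumptions, not any headset runtime’s actual code.)

```python
import numpy as np

def intrinsics(width, height, fov_y_deg):
    # Pinhole camera intrinsics from resolution and vertical field of view.
    f = (height / 2.0) / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([[f, 0.0, width / 2.0],
                     [0.0, f, height / 2.0],
                     [0.0, 0.0, 1.0]])

def yaw_rotation(yaw_deg):
    # Rotation about the vertical axis (head turning left/right).
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def reproject_pixel(px, py, K, R_delta):
    # Where a pixel from the already-rendered frame should land after the head
    # has rotated by R_delta. Rotation-only, so no depth buffer is needed.
    ray = np.linalg.inv(K) @ np.array([px, py, 1.0])  # pixel -> view ray
    proj = K @ (R_delta @ ray)                        # rotate ray, project back
    return proj[:2] / proj[2]

K = intrinsics(2160, 2160, 100.0)             # per-eye resolution / FOV, made up
R = yaw_rotation(1.5)                         # head turned 1.5 degrees since render
print(reproject_pixel(1080.0, 1080.0, K, R))  # centre pixel shifts sideways
```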

I just thought you had a good point. It is pretty easy to get wrong… if your tracking inputs have latency or jitter you’re doomed from square one, and if you mispredict the orientation of the camera in the next frame for any reason, the effect will end up being detrimental.

Yeah, that’s still optimization in a nutshell VR or not… if any dev can find a way to hit higher base resolutions and frame rates or lower latency then they’ll do so at every opportunity.

Sure, this has a (minor) negative impact on image quality.
But again, image quality isn’t important for a game that uses Reflex.
More important is the aim. And if the aim is improved because my crosshair shifts faster by faking the picture, NVIDIA achieved its goal.
It is basically the opposite of frame gen, where you want a better image and don’t care about the added latency.

I would be surprised. But I get now why you have such a negative view of these technologies.
You mix together what does not belong together.
That is like putting an Entrecôte and a Crème brûlée cooked by Jamie Oliver into a blender and then saying “this does not taste good, so Jamie isn’t a good chef” :smile:

Totally agree. And lots of reviewers have accepted that fact. Computerbase.de did a podcast asking how they should test games in the future. It does not make sense to just compare raw FPS when DLSS provides a superior image, and it is hard to compare an FSR FPS with a DLSS FPS when the DLSS one looks better. Their conclusion was that describing such things in text is becoming more important and FPS graphs are becoming less important. Many other outlets are coming to the same conclusion.

Yes, but the native one has that too. Not sure what that is about.

That’s crazy talk, image quality is always important.

I don’t understand why you think approximations can exceed the accuracy of what they’re approximating.

It’s slower from the game’s perspective though. It takes time to warp the image towards how Reflex 2 predicts it would have looked had it been rendered from more recent mouse inputs, instead of presenting it immediately. We’ll see if it gets used competitively.
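As a toy illustration of that trade-off (every number below is made up): the warp adds a little GPU time before present, but the orientation baked into the presented pixels comes from a much fresher input sample.

```python
# Toy latency model; all numbers are invented for illustration.
def input_age_at_present(render_ms, warp_ms, input_age_at_render_start_ms):
    # Without warp: the frame reflects input sampled before rendering started,
    # so by the time it is presented that sample has aged by the render time.
    without_warp = input_age_at_render_start_ms + render_ms
    # With warp: the finished frame is re-aimed using an input sampled just
    # before present, so the sample only ages by the warp cost itself.
    with_warp = warp_ms
    return without_warp, with_warp

plain, warped = input_age_at_present(render_ms=12.0, warp_ms=0.5,
                                     input_age_at_render_start_ms=4.0)
print(f"input age when presented: {plain:.1f} ms unwarped vs ~{warped:.1f} ms warped")
print("total time until present is ~0.5 ms longer with the warp, as noted above")
```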

They’re similar in that frame gen can’t predict what will be rendered next and Reflex 2 can’t predict what would have been rendered if the game logic had had access to more recent inputs. We’ll see if they add Reflex-style inputs to framegen.

The more FPS you lose the more you save. :smile:

Not for competitive shooters like Valorant or CSGO.
You do know that these people turn everything down in the graphics settings and use Full HD on 25" monitors?

Agree. It will.

You are still not getting it, are you?
This does matter, even if the FPS number is the same.

Assume that we have 3 GPUs.
1 offers 60fps.
2 offers 60fps.
3 offers 60fps.

1 is upscaled by DLSS
2 is upscaled by FSR
3 is native

If you just look at the FPS numbers, you come to the conclusion that they are all equal.
But they aren’t equal, are they?
Even you don’t think they are equal.
You would say “but 1 and 2 have upscaling artifacts, that is why 3 is better, even if the FPS numbers are the same”.

So even you agree that FPS alone don’t tell you much.

I played my share of CSGO and only turned off settings that prevented me from maintaining my monitor’s refresh. It’s just a matter of preference. I did well enough that I was accused of hacking pretty frequently so I think it’s safe to say I was sufficiently optimized. Even competitively, graphics settings are turned down more because it makes targets easier to spot than for higher FPS.

I’d say your test is designed to compare image quality, not performance. I’d also say that if you did want to use the test for information about the relative performance of the GPUs, then assuming FPS isn’t capped, GPU 3 is more powerful, because native rendering is necessarily more resource intensive than any form of upscaling.

I agree that if you’re going to compare the performance of GPUs using FPS that other factors need to remain consistent. It’s only when you insist that performance comparisons must be made using proprietary features that FPS loses value.

FPS comparisons that exclude proprietary features will continue to reveal a great deal about the relative performance of GPUs though. Are you not at all interested in Blackwell benchmarks on equal footing? Or is it just nVidia all the way because you’re already sold on their proprietary features?

Haven’t caught up with the thread yet, but I’ll have to agree at least with one thing. The optimization of some modern games feels off.

I feel like there are more than a few titles that optimize until their game runs well enough (4k60) on a very high end card with all the DLSS/XeSS/FSR sort of options enabled and then let everything go down from there. Which feels a little off.

I feel like it used to be easier to go high framerate at lower resolution. I’m still rocking a 1080p 120 hz monitor but games seem to scale better in resolution than in framerate. And I’ve found reports of modern games on high end cards that are expected to use the upscaling features to hit 60.

So I’m not sure if he’s right, but it does feel like optimization has taken a step back and is leaning a little hard on things that were originally created as a bonus level of scaling: helping weaker/older cards, or pushing even higher resolution (sorta) and framerate.

That’s exactly the crux of the issue. Marketing needs to claim 4k60 and, without better raster performance, upscaling has been the easiest way to get there. The issue is best illustrated with consoles because their capabilities are limited and settings are hand picked by developers. Consoles were claiming 4k60, but it was far too high a resolution to hit without sacrifices to image quality that developers were not willing to make, and for good reason. Upscaling became nearly ubiquitous on console, and once PC gamers started moving to 4k even the PC master race wasn’t immune to the same situation. This is why upscaling is best suited to 4k and why it’s often trash at 1080p, where even DLSS Quality drops the base resolution to 720p.
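To put numbers on that last point, here is the arithmetic using the commonly cited per-axis render scales for DLSS’s modes (assumed defaults; individual games and other upscalers can differ):

```python
# Commonly cited per-axis render scales for DLSS modes (assumed defaults).
MODES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5, "Ultra Performance": 1 / 3}

def internal_resolution(out_w, out_h, scale):
    # Internal (pre-upscale) render resolution for a given output resolution.
    return round(out_w * scale), round(out_h * scale)

for name, scale in MODES.items():
    w4k, h4k = internal_resolution(3840, 2160, scale)
    wfhd, hfhd = internal_resolution(1920, 1080, scale)
    print(f"{name:17s} 4K -> {w4k}x{h4k}   1080p -> {wfhd}x{hfhd}")
```

At 4K output even Performance mode still starts from a full 1080p image, while at 1080p output Quality mode is already down to 1280x720.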

It’ll probably never make sense to render everything at 4k so I think the phase of optimization that we’re currently in is the move towards incorporating these advanced upscaling techniques on a finer grain into the rendering process. This is already happening but most internal subsampling is still based on either traditional resampling or checkerboard rendering which is why we’re seeing more blur, aliasing and shimmering even when not using DLSS, FSR, etc. Games need to start implementing their own advanced upscaling techniques internally. Not only to get rid of their dependence on proprietary technologies provided by third parties to hit their resolution targets but because versions of these technologies tuned for a specific purpose and used only where appropriate will be able to significantly reduce artifacts vs generic implementations. Games vary wildly from detailed photo realism to highly stylized cell shaded scenes with vivid color, sharp lines and smooth gradients so generic solutions are faced with handling a daunting range of content.

Frame generation may similarly be incorporated directly into rendering. For example, Assetto Corsa only renders some fraction of the six faces of its reflection capture cubes every frame. Using frame generation to update each of the other faces would be a clear win over having reflections effectively run at a fraction of the frame rate, and artifacts would be confined to reflections, where they would be far less noticeable.
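A rough sketch of that kind of staggered cubemap update (generic face names and schedule, not Assetto Corsa’s actual code); the “reuse” slots are exactly where frame-generation-style extrapolation could slot in:

```python
CUBE_FACES = ["+X", "-X", "+Y", "-Y", "+Z", "-Z"]

def faces_to_render(frame_index, faces_per_frame=1):
    # Round-robin: only a subset of the six reflection cubemap faces gets a
    # fresh render each frame; the rest are reused (or could be extrapolated).
    start = (frame_index * faces_per_frame) % len(CUBE_FACES)
    return [CUBE_FACES[(start + i) % len(CUBE_FACES)] for i in range(faces_per_frame)]

for frame in range(6):
    fresh = faces_to_render(frame)
    stale = [f for f in CUBE_FACES if f not in fresh]
    print(f"frame {frame}: render {fresh}, reuse/extrapolate {stale}")
```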

nVidia will keep things proprietary for as long as they can but ultimately the tech will end up directly in the hands of developers.

Slightly off topic: a new vid in the series of YouTube exploits:

He is putting himself on a pedestal and growing an audience, which is part of doing the YouTube business.

Probably his best contribution was increasing awareness of Nanite’s cost when Epic was still suggesting that using Nanite was free.

The rest of it is highly opinionated and I don’t really feel like writing a novel on it. I don’t write renderers, but I am aware that there are enough trade-offs and no single “real true answer” to go with. Also, let’s just say that a big-budget AAA game can justify more bespoke optimizations than most other games.

The performance discussion has taken a rather unhealthy turn. Last gen, a lot of stuff was baked since the hardware was not good enough to do things in real-time. Now it is, and the settings can govern the quality in those areas, too.

Settings are parameters. You could have a setting which governs how many light bounces you are going to calculate. You could calculate 2, 5, 8, 200000 light bounces if you wanted to. Of course, at some point, you probably couldn’t run the game at all with any hardware released during your lifetime. The max settings of a game are simply the highest values the developers have chosen to expose to the player. The highest standard settings preset inside UE5 is called “cinematic”; it isn’t even intended to be used for games, but rather for ArchViz and the like. Most console versions use a mix of low/medium settings. Many games don’t really yield notable benefits from using settings above medium/high for a lot of options.
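A made-up illustration of “settings are parameters”: each preset is just a bundle of raw engine values, and “max” is wherever the developer stopped the table. Every number here is invented:

```python
# Hypothetical preset table mapping preset names to raw engine parameters.
PRESETS = {
    "low":       {"gi_bounces": 1, "reflection_res": 128,  "shadow_res": 1024},
    "medium":    {"gi_bounces": 2, "reflection_res": 256,  "shadow_res": 2048},
    "high":      {"gi_bounces": 3, "reflection_res": 512,  "shadow_res": 2048},
    "ultra":     {"gi_bounces": 4, "reflection_res": 1024, "shadow_res": 4096},
    "cinematic": {"gi_bounces": 8, "reflection_res": 2048, "shadow_res": 8192},
}

for name, params in PRESETS.items():
    # Nothing stops a developer from exposing even higher values; "max" is
    # simply where this table ends.
    print(f"{name:10s} {params}")
```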

The highest settings are not intended to be used without upscalers. Running native @ 4K is probably not going to be worth it from an image quality / performance standpoint. For example, at the highest settings the raytraced reflections in a puddle are detailed, but they are too costly to render in real time at high resolutions. This is where the upscalers come in: you can run the game at a lower resolution, upscale, and still have nice visuals in the reflections, as opposed to running the game at medium settings at native resolution without upscalers.

Because people take it personally when their hardware cannot run the game at maximum settings, and judge the game’s performance based on the maximum settings, it’s going to lead to developers just omitting proper maximum settings from the game and instead renaming medium or high to maximum. The best we can then hope for is developers leaving the maximum settings behind console variables, like Ubisoft did with Pandora.

That is right.

Yes. Of course!
If all offer the exact same performance (FPS), image quality is important, isn’t it?

I hate proprietary software. But just because I hate it does not mean it all of a sudden vanishes.

I mostly run FTL on an old MacBook and sometimes some Overwatch 2 on my PC with a $200 1660 Super I bought 4 years ago, so no, I am not an Nvidia fanboy, not sold on DLSS, not willing to spend over $400 on any GPU, and I don’t see a reason to upgrade anytime soon.
Give me a good game that my hardware can’t run, or a 1000Hz OLED, and I will reconsider :wink:
But yes, I am interested in performance comparisons on equal footing.
But to me, equal footing means that:

  • Not all FPS are made equal (in terms of quality and in terms of frametime)
  • proprietary features are a reality
  • the days of raw rasterized FPS performance are gone

Frametime was overlooked even back in the old rasterized days; Intel does a great job combating this issue. The last point is probably the most important one. Again, most good outlets have already agreed on that. Gamers have always been a little bit conservative and behind the curve. Me included :slight_smile:

Yes,

If proprietary features are in use, then image quality matters subjectively; if they’re not, then ensuring it’s consistent is important for making objective performance comparisons.

Not entirely but it should vanish from any performance comparison attempting to pass itself off as being objective.

Frametime is another good example of marketing being misleading, because it’s just the reciprocal of the frame rate with no averaging. That’s why a frame at 60fps has a frametime of exactly 1 divided by 60 seconds, or about 16.7 milliseconds. They’re equivalent in the sense that one fully determines the other.
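Spelled out, the per-frame relationship being claimed here looks like this:

```python
def frametime_ms(fps):
    # Instantaneous frame rate and frametime are reciprocals of each other.
    return 1000.0 / fps

def fps_from_frametime(ms):
    return 1000.0 / ms

print(frametime_ms(60))           # 16.66... ms
print(fps_from_frametime(16.67))  # ~60 fps
```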

Great summary of where we disagree. :smile:

From my perspective:

  • objectively valid comparisons between GPUs using FPS will always be possible
  • comparisons using proprietary features that sacrifice image quality for performance will always be subjective
  • the days of raw rasterized performance being of primary importance in terms of both image quality and latency aren’t going anywhere

When ignoring reality, anything is possible lol.

Real FPS with low jitter are the goal, anything else is marketing wank.

Based on the four criteria of “confrontational tone,” using “capitalization for emphasis,” showing “disdain for opponents,” and “use of casual, informal language,” ChatGPT thinks this post was written by Threat Interactive :laughing:

If so, welcome to the forums!

Don’t agree. Because it exists in reality.

No. It could be that you get 60 frames in one millisecond and then nothing for 999ms.
If you only look at FPS, this would be 60FPS.
But from a frame time perspective, your “60FPS” has the same latency as smoothly paced 1FPS. One update every second.
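Putting that example into numbers (a rough sketch, modelling the burst as 59 tiny gaps plus one 999 ms gap):

```python
# 60 frames delivered in the first millisecond, then nothing for ~999 ms.
frametimes_ms = [1.0 / 60] * 59 + [999.0]   # time between consecutive frames

avg_fps = 1000.0 * len(frametimes_ms) / sum(frametimes_ms)
print(f"average: {avg_fps:.1f} FPS")                                # ~60 FPS on paper
print(f"worst gap: {max(frametimes_ms):.0f} ms with no new frame")  # ~1 visible update per second
```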

If people have such huge knowledge gaps that they don’t understand the difference between fps and frametime, no wonder they struggle with more advanced stuff like frame gen.

So because something exists it belongs in performance comparisons… brilliant.

Right because you absolutely must average over a full second or it’s not FPS because the word second is in the definition… let’s forget about 1% and 0.1% FPS lows too because they have to be invalid for the same reason. SMH

Fully agree lol

I’m tapping out because this has reached a level of absurdity that I’m not willing to entertain further.

Well yes.

No. It does not really matter if you take the timeframe of one second or one hour.

On the contrary, they are good tools.
In the example I made, if we calculate the 0.1% value, it would be roughly 1FPS.
That would be a way better description of reality than “an average of 60 frames per second”.
Both are true; one is misleading.
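And the same trace run through a percentile-style lows calculation (a sketch; benchmark tools differ in how exactly they define 1%/0.1% lows, this one takes a simple percentile of frametimes over a repeated ten-second run):

```python
import numpy as np

# Repeat the burst-y pattern from the earlier example for ten seconds.
frametimes_ms = ([1.0 / 60] * 59 + [999.0]) * 10

def percentile_low_fps(frametimes, pct):
    # FPS corresponding to the slowest `pct` percent of frametimes.
    slow_threshold_ms = np.percentile(frametimes, 100 - pct)
    return 1000.0 / slow_threshold_ms

print(f"average FPS: {1000.0 * len(frametimes_ms) / sum(frametimes_ms):.1f}")
print(f"1% low:      {percentile_low_fps(frametimes_ms, 1):.2f} FPS")
print(f"0.1% low:    {percentile_low_fps(frametimes_ms, 0.1):.2f} FPS")
```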

I don’t understand the hostility here - GPUs have gained a new capability as part of the ever-broadening reach of ML. It’s a wonder that we’re getting software which utilizes this new vector that would otherwise rot. To be clear - consumer GPUs have this tech because it’s already been made for the enterprise. We are ever-fed the scraps of what trickles down from the big money.

Yes raster is still important. Yes having more options for qualitative software configuration of a GPU is a good thing, so that consumers can make the most of their purchases. By all means choose to configure yours how you’d like. I specifically chose the term qualitative, because of the ‘fake frames’ argument. But if I can utilize ML tech to keep even frametimes, and reduce stutter on a low end system punching above its weight class, then that is my choice to make. If GPUs were locked or forced to use these features, then I could understand the criticism. But as of yet that is not reality.

It’s important to have benchmarking and testing that both do and do not utilize these features (with clear labeling), so that consumers can better qualitatively assess the value of these things for themselves. Not because someone in a leather jacket oversells its value or ‘performance’. Some people will give significant weight to raster and cooling/noise, and ignore the rest. That’s their choice to freely make. That’s cool. Have a great day all!

So now averaging over larger time frames is allowed but not smaller… good example of why the conversation is over.

Oh yeah, well I think Superman would win because his cape is red.

The issues arise when flawed views of these capabilities (like the idea that framegen is good when you want a sharper image, or that Reflex 2 can actually reduce latency when, in the traditional sense, it adds latency) are explained, and the explanations are countered with responses that move goalposts, impossible hypotheticals, and nebulous nonsense like “you don’t even understand X so how can you possibly understand Y” or “you need to read more”.

But all that aside, even if these predictive techniques are gifted with ideal scenarios, they still have glaring issues that can never be ignored, regardless of how advanced they may become. Give them all the time in the world to work their magic and you’re still left with just good old raster plus inpainting.