Is Threat Interactive's (YT) game optimisation channel a long con?

Great - so it sounds like you understand where you stand on utilizing the technology. Carry on!

I personally have seen games with deliberately ‘muddy’ art styles where the introduced noise does not impact my qualitative enjoyment of said games. There are others with ultra-sharp imagery and geometry where any bit of introduced noise detracts, and in those instances I can turn down settings to get things humming along on my 3080 for maximum enjoyment.

None of these things invalidate your thoughts on the matter - again quantitative and qualitative characteristics are not directly comparable on a 1:1 basis. You do you!

Never said so.

Hope all is well with you.

It is

Not really, not even in the “traditional sense”. It might add one ms in the “traditional sense” to add the warp. But that is about it.
BTW that one ms I pulled out of thin air, based on Valorant having 2ms total latency with warp.

Again, you seem to be confusing and mixing together technologies that have nothing to do with each other.
Reflex 2 and DLSS4 are not the same thing!
Heck, I think that is how we ended up discussing Reflex 2, when we first started with framegen :joy:

Compare DLSS off with DLSS4 in terms of image quality.
I would argue that DLSS looks way better.
Scratches on the table, coffee cup on the right, food cup.

Nvidia did not provide any information on that comparison, so I take it with a grain of salt. I am sure we will see real tests soon.

Did not read the whole thread so I don’t know if this point has been raised yet, but given how hooked the video game industry is on secret sauce, I am not surprised that UE and other game engines have a lot of performance left on the table.

If you want an insight into lessons learned in this industry, I recommend Mike Blumenkrantz’s blog Super Good Code, and how a single developer single-handedly managed to improve the Linux GPU stack by up to 300% just by switching to slightly more efficient algorithms.

Mike even managed to make the Radeon native OpenGL drivers in Linux look bad (and subsequently be improved). So it is certainly plausible that this channel has found optimization paths for the UE engine.

4 Likes

I’ve been better… I find it difficult to carry on with this level of nonsense.

You set a base rate of 60fps without frame gen, with both real and generated frames taking the exact same 16ms (which is already crazy), and now you want to claim 0.1% lows of 1fps in that scenario even though not one of your example frames jumps by 1000ms… all to explain how I don’t understand frametime and how I “seem to be confusing and mixing together technologies”… dude…

I am becoming more convinced that you are indeed a fish lol.

1 Like

Your reading comprehension sucks then, as I interpreted that as them explaining an extreme example of stuttering.

If you generate a frame every 0.0016666… seconds for the first 0.1 seconds, that’s a rate of 600 frames per second.

However, if you can only go 600 FPS for 0.1 seconds and then have a holdup for 0.9 seconds, you will still get 60 FPS on average, but with extreme stuttering. That’s just fact.
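
To put the made-up numbers side by side (nothing measured here, just the toy scenario from above):

```python
# Toy frametimes, in ms: a steady 60 FPS run vs. 60 frames crammed into 0.1 s
# followed by a 0.9 s stall. Both "average" 60 FPS over the second.
steady = [1000 / 60] * 60                 # ~16.7 ms between every frame
bursty = [100 / 60] * 59 + [900.0]        # ~1.7 ms gaps, then one 900 ms freeze

for name, gaps in (("steady", steady), ("bursty", bursty)):
    avg_fps = len(gaps) / (sum(gaps) / 1000)
    print(f"{name}: avg {avg_fps:.0f} FPS, worst gap {max(gaps):.0f} ms")
```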

Games are best when rendered at a consistent cadence; smoothness is a real factor, and with modern 144Hz+ monitors this is more important than ever. Once upon a time 60Hz was the gold standard for rendering, just like DVI connectors were the gold standard monitor connector for PC. The world has moved on from that.

1 Like

I mean I am very sorry to tell you yet again that you are mixing stuff together that does not belong together. Now you are even quoting stuff from me that does not belong together.

I try to have a good faith discussion, so I will try one last time.

The first thing you quoted was about a theoretical frame gen, and how such frame gen could offer you a “better” picture by doing what video codecs do: creating B-frames by looking at what happened before and after. That of course ADDS latency, because you need to delay the output of the after frame.
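
Rough timing sketch of what I mean, with purely hypothetical numbers (again, this is about a theoretical frame gen, not a claim about how Nvidia does it):

```python
# B-frame-style interpolation: the in-between frame needs both neighbours to exist.
frame_interval_ms = 1000 / 60   # real frames finish every ~16.7 ms

frame_a_done = 0.0                      # real frame A is ready
frame_b_done = frame_interval_ms        # real frame B is ready one interval later

# The interpolated frame sits between A and B, but it can only be built once B exists,
# so frame A's presentation has to be held back by roughly one frame interval.
added_latency_ms = frame_b_done - frame_a_done
print(f"latency added by waiting for the 'after' frame: ~{added_latency_ms:.1f} ms")
```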

The second thing you quoted was about how FPS does not tell you the whole picture because it ignores frametimes. We even mostly agreed on that one, and you made the good point that it is hard to hide bad frametimes in the 0.1% lows.
I made an example of an average of 60 FPS but 0.1% lows of only 1 FPS, by having 60 frames in the first milliseconds and then nothing for 999ms.
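
My example in code form, so there is nothing left to misread (the frametimes are obviously made up to be extreme):

```python
# 60 frames in the first millisecond, then nothing for the remaining 999 ms.
gaps_ms = [1 / 60] * 59 + [999.0]     # time between consecutive frames

avg_fps = len(gaps_ms) / (sum(gaps_ms) / 1000)

# With only 60 frames in the second, the slowest 0.1% is effectively the single worst gap.
worst_gap_ms = max(gaps_ms)
low_0_1_fps = 1000 / worst_gap_ms

print(f"average: {avg_fps:.0f} FPS, 0.1% low: ~{low_0_1_fps:.0f} FPS")
```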

Even if he has (which I have my doubts about), compare that to how Mike reacted.
Did Mike make YouTube videos where he trashed Linux GPU developers?
In a tone to create some childish fanboy following?
And did he say “you can soon buy my super duper Linux GPU tool”?
Or that he needs money to work on this?
Or how other Linux devs were mean to him in the forums or some mailing list?

Unreal Engine is on gh.
If you have any input on how to improve performance, you can open up an issue there. This is also something that would look great on his CV, no matter if he applies at Epic or anywhere else.
If his gh issues get closed without any good reason, we can discuss that.

It almost sounds like you, or someone you know, works on UE for a living, either as a third-party dev or as part of a team inside a AAA studio. I would try not to be too offended by the bombastic claims. It is healthy to be sceptical of these claims; it is equally healthy to be aware of the fact that no software is perfect and that mature projects often take a lot of time to make big and sweeping changes.

Yes, the content creator in question is being a whiny little b**** about it, that’s part of the current YT popularity contest / trending algo shinies. I agree it’s not the nicest way to present concerns. Mike has the better approach here for sure.

These days that means less than you think - a project needs to be open to contributions or it’s basically just a window to display your source code. AFAIK UE requires quite a few hoops to accept patches, but I am not involved specifically in UE coding so I might very well be wrong about this. It has been years since I last checked.

Also, the perf updates could come with hidden downsides, so even if the patches are sound we could be looking at a situation like the Linux kernel realtime patches. Those took roughly 10 years to fully merge into the kernel codebase, and the work is still not completely finished, though now there are something like 20 patches left out of the original 300 or so. Not suggesting this is nearly as large, but…

1 Like

No to all of these claims. But I have worked, and to some extent still do work, in software.

The only hoop you have to jump through does not apply to patches or issues; you have to link an Epic account to get access.

Exactly, that is why I trust him even less.
Not saying it can’t be that he has a 300x performance trick that can be implemented without any downsides. If that is the case, he can open a pull request and Epic would be more than happy to implement it. He can show that PR to his next potential employer. UE runs better. Games run better. Win, win, win.

Instead he chooses to whine on YT. I don’t know why he does it this way.
Immature?
Youtube money?
Looking for VC?

1 Like

I have no issue giving you the benefit of the doubt in that you’re not operating intentionally in bad faith but I do think that your posts are a reflection of your chaotic nature making them hard to follow to the point where they could easily be taken as being in bad faith… I couldn’t imagine living a day in your head.

As far as mixing stuff together that doesn’t belong, introducing frametimes to the conversation is a good example. Frametimes aren’t a concern with the topics under discussion because none of these new technologies suffer from issues with erratic timing. In fact most issues with frametimes originate on the CPU side but I suspect you already know that.

I also think you know full well that we both understand frametimes. It’s not exactly a difficult concept. I just think you’re argumentative which is why we end up with exchanges like the following:

I refer to “a frame at 60fps” equating to a frametime of 16.6ms and you counter with “No” and an absurd scenario where frames are rendered at 60000fps for 1ms… Yes I mistakenly thought you were referring to your previous example because it never occurred to me that you’d be drawing attention back to a scenario that I considered absurd.

But let’s try and stay on topic and get back to the main reason you keep accusing me of mixing topics: because I talk about Reflex 2 and framegen as though they’re similar, even though they’ve been marketed for different purposes.

Here’s how I explained they are similar.

If that’s not clear enough then imagine Reflex 2 having to shift a frame significantly to the left because the mouse is moving quickly in that direction. How does Reflex 2 know what to draw to fill in the missing right side of the image? Now imagine framegen with a scene quickly panning left. How does framegen know what to draw to fill in the right side of the image?

Yeah, Reflex 2 and framegen are not exactly the same thing. Yet in the specific way that they are both faced with the problem of having to fill in parts of images with predictions, they are the same, and they suffer many of the same issues as a result.
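
If it helps, here is the shared problem in toy form. This is just me sketching the geometry of it in a few lines, not how Reflex 2 or framegen is actually implemented:

```python
import numpy as np

# A tiny 4x8 "frame"; the values are just pixel IDs so the shift is easy to see.
frame = np.arange(32, dtype=float).reshape(4, 8)

shift_px = 3  # fast mouse flick / camera pan: content moves left by 3 columns

warped = np.full_like(frame, np.nan)
warped[:, :-shift_px] = frame[:, shift_px:]   # existing pixels slide left
# warped[:, -shift_px:] stays NaN: the right edge has no source data at all.

# Those NaN columns are the disoccluded region that a Reflex-2-style warp, or a
# generated frame during a fast pan, has to fill with something it never rendered.
print(warped)
```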

I think I’ve made my points clearly, and I think continuing to conflate framegen with how video codecs work demonstrates your lack of understanding of the shortcomings of the generative technologies discussed.

Others seem to follow just fine. But thanks for the argumentum ad hominem.

It makes it up

Again, I have not looked into how Nvidia does frame gen, but I already explained one possibility of how it could be done in a video-compression-like fashion (one that adds latency by delaying output, which of course is necessary to render in-between frames between real P-frames instead of just making up 3 frames in the future).

But why even bring it up?
Because you made fun of @EniGmA1987 and Blur Busters.
Again, I don’t know how Nvidia implements frame gen.

I only know that you can’t dismiss Blur Busters for making a very good argument for “in between” frame gen. And I gave you a perfectly good example, even with timings, to help understand that.
I am very happy to discuss the example I made, if you have any criticism for that.

I do apologize but at the same time I’m not sorry because I’m just being faced with having to reiterate the same arguments.

Right… as does frame generation… and therein lies the reason that I’m criticizing these technologies as suffering from the same issues.

This is all right in the marketing. Scroll up slightly from the point linked in this press release and you’ll see the following: “Working in concert, our new hardware and software innovations enable DLSS 4 to generate 15 out of every 16 pixels”. Same with Reflex 2… it’s right in the marketing that it uses inpainting.

These “made up” aka inpainted aka generated pixels are all based on data from rendering (including depth and motion) so rendering remains as the ground level source of both detail and responsiveness.

It’s this reliance on “made up” content particularly at lower resolutions and refresh rates that causes complaints of blurring, ghosting, poor responsiveness, and other artifacts associated with these predictive technologies.

If you do look into frame generation and video compression then you’ll find that you’ve been putting things together that don’t belong.

Nvidia wants devs to implement framegen, so an implementation guide is publicly available, which is far more direct than trying to piece together how everything works from marketing material. Even if you have no interest in actually implementing framegen, just looking at the inputs gives you a pretty good idea of how it operates and its limitations.

Despite the fact that both video compression and generative AI can be described using the word predictive, the former just isn’t predictive in the same sense as the latter, which is why the Blur Busters article is grossly misleading. The decompression side of video codecs never makes up frames; it decodes every frame regardless of type (I, P, or B) and then displays them. The data used from reference frames by “predictive” P-frames is explicitly encoded, not estimated, guessed, or made up. The same holds for B-frames. Blur Busters is using video compression to create confusion and conflate generated frames with rendered frames, leading to misconceptions that I’ve taken issue with, like the idea that absurd levels of frame generation, aka made-up frames, are a solution to issues that arise from using made-up content in the first place.
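
To make that distinction concrete, here is a stripped-down sketch of what a decoder does for one block of a P-frame. The data layout is invented for illustration, but the point stands: the motion vector and residual are read out of the bitstream, not guessed:

```python
import numpy as np

def decode_p_block(reference, motion_vector, residual, y, x, size=4):
    """Rebuild one P-frame block: copy the block the encoder pointed at in an
    already-decoded reference frame, then add the encoded residual. Nothing here
    is estimated by the decoder; both inputs came out of the bitstream."""
    dy, dx = motion_vector
    predicted = reference[y + dy:y + dy + size, x + dx:x + dx + size]
    return predicted + residual

# Toy stand-ins for data parsed from a bitstream.
reference = np.arange(64, dtype=float).reshape(8, 8)  # previously decoded frame
motion_vector = (1, 2)                                 # signalled by the encoder
residual = np.full((4, 4), 0.5)                        # signalled by the encoder

block = decode_p_block(reference, motion_vector, residual, y=0, x=0)
print(block)
```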

2 Likes

No, or at least not necessarily. I could also just delay the output and make up in-between frames, just like H.265 does. Again, I am not saying that DLSS4 does it this way!

Nobody is denying these issues.
But even with the current “I make up 4 frames in the future” DLSS 4 frame gen, most reviewers agree that games, for example Cyberpunk running at 70FPS native with 30ms input lag, feel way worse than with frame gen enabled at 210FPS with 33ms input lag.

I can’t test that for myself, simply because I don’t own a 5090, don’t own a 240Hz OLED, nor do I care about shit games like Cyberpunk.

In my opinion, there are way worse offenders for that than DLSS. I don’t disagree with TI on that one, modern games look bad in my opinion. But the consumer does not care about that blurry mess. Everybody seemed to love RDR2.

Not really, you are just talking about frame gen that makes up future images, while Blur Busters and I are talking about frame gen that makes up in-between frames.

Just because it confused you, does not mean that it is confusing for everybody :wink:
No but seriously, you might have got confused because you read it in the DLSS context.
For most other people it seems pretty clear that they are just talking about a far distant future technology. DLSS4 is at best a primitive first step in that direction. And even the monitor hardware will still not be there for years.

But yes, in reality.

If I were confused, then the counter-arguments would have been direct, not circular.

Ooooh, this reminds me… Wonder if things like AA and DLSS will move in to the monitor one day, instead of being a part of the GPU.

Brings a whole new meaning to the concept of Gaming Monitor :grin:

But even with the current “I make up 4 frames in the future” DLSS 4 frame gen, most reviewers agree that games, for example Cyberpunk running at 70FPS native with 30ms input lag, feel way worse than with frame gen enabled at 210FPS with 33ms input lag.

Der8auer’s benchmark of this had a 286FPS avg and 61FPS 1% lows on DLSS Performance, and he states it “didn’t really feel like 286FPS”. 15FPS real frametimes → roughly 70ms of input lag in render time alone, several times per second, is god awful. Did +3ms come to you in a dream?
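
For anyone following along, the arithmetic behind that (assuming 4x multi frame gen, i.e. one rendered frame per four displayed):

```python
# Der8auer's numbers: 286 FPS average and 61 FPS 1% lows with DLSS Performance + MFG.
low_fps_displayed = 61
mfg_factor = 4                                  # assumption: 4x multi frame gen

real_low_fps = low_fps_displayed / mfg_factor   # ~15 rendered frames per second
real_low_frametime_ms = 1000 / real_low_fps     # ~66 ms per rendered frame, ballpark 70

print(f"~{real_low_fps:.0f} real FPS in the lows -> ~{real_low_frametime_ms:.0f} ms "
      "of render time alone, before any other latency")
```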

1 Like

Lol careful. I brought up how it looks like things may progress to moving some of the tech into monitors to get by bandwidth limitations and improve image quality and the very idea of anything moving out of the GPU was highly offensive to some people in this thread.

Watch a DF video or read the Computerbase review.

Is that tone really necessary btw?

He states that the 1% lows only doubled, so it does not feel multiple times faster (only double).
Just to put your comment into perspective.

… how in the fuck is this something worth issuing death threats over? Even IF he was completely, 100% wrong… jesus christ this is just poor coding in fking video games we’re talking about here.

And since I’m here anyway, I haven’t seen this posted. Yes, frame gen IS interpolation, it DOES add latency, the frames ARE fake and crap. The single benefit (other than larger FPS number for marketing) is “smoothness”. If that’s important to you and worth the artifacting and latency increase, more power to you, I’m not here to tell you your preference is wrong. But please stop huffing nvidia’s farts and pretending it’s something it’s not.

edit: forgot the link https://www.youtube.com/watch?v=B_fGlVqKs1k