So I just signed up to the forum. I do like your videos even if I don't understand everything that's said. Anyway, with the new NVIDIA launch and them talking about their use of AI for frame generation, I started thinking about the video where Tom Petersen goes over Intel's use of AI for frame generation. Can anyone explain what the difference between the two is, and the pros and cons? It seems like NVIDIA is trying to "brute force" frame gen and Intel is trying to "intelligently predict" it. There's also the question of compatibility, since Intel's is in the CPU and NVIDIA's is in the GPU. Is it likely the two systems will synergize well? Also, if possible, which system would be better for PCVR? Thanks!
Not sure if everything in that article is still valid:
The “pro” is your FPS number goes up.
The con is that input latency stays the same (if you have 30 real FPS / 33 ms frame time, then that is your latency), image sharpness goes down the drain, and you get an "artist's impression" of what the game should look like.
in a nutshell:
all the FG, AI, and DLSS (1, 2, 3, 4) from NVIDIA, AMD, and Intel is:
Reducing your set screen resolution (example: 4k) by picking a lower resolution to render at (2560, 1080, or 720) depending on quality preference, while still keeping your screen size setting (4k) in the config, but rendering much lower.
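To put rough numbers on the "quality preference" part, here's a small sketch of how a preset maps the output resolution to a lower internal render resolution. The scale factors are the commonly cited DLSS-style ratios (Quality ≈ 2/3, Performance = 1/2, etc.); treat them as illustrative assumptions, not vendor-official values.

```python
# Rough illustration: quality presets pick an internal render resolution
# that is a fraction of the output ("screen size") resolution.
# Scale factors are commonly cited DLSS-style ratios -- approximate, not official.

PRESET_SCALE = {
    "quality": 2 / 3,          # ~0.667 per axis
    "balanced": 0.58,
    "performance": 0.5,
    "ultra_performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    """Return the (lower) resolution the GPU actually renders at."""
    s = PRESET_SCALE[preset]
    return round(out_w * s), round(out_h * s)

if __name__ == "__main__":
    for preset in PRESET_SCALE:
        w, h = internal_resolution(3840, 2160, preset)
        print(f"4K output, {preset:>17}: renders {w}x{h} internally")
```

With a 4K output target that works out to roughly 2560x1440, 2227x1253, 1920x1080 and 1280x720 internally, which is where the "2560, 1080 and 720" figures above come from.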
Frame generation takes a virtual frame at a reduced resolution, with maybe just changes in lighting, motion, and background, and inserts the average of the prior frame and the next frame. It then takes information from the generated frame and stages the next instance of the following virtual generated frame, also at a lower resolution, carrying forward items of motion and lighting (rough sketch a bit further below).
Essentially, getting higher frames by selectively reducing the resolution.
AKA
a gimmick.
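To make the "average of the prior frame and the next frame" idea above concrete, here is a minimal toy sketch in Python/numpy. Real frame generation warps pixels along motion vectors (or an optical-flow / AI model) before blending; a plain average like this is only meant to show where the inserted frame's data comes from.

```python
import numpy as np

def naive_interpolated_frame(prev_frame: np.ndarray,
                             next_frame: np.ndarray,
                             t: float = 0.5) -> np.ndarray:
    """Blend two rendered frames to fabricate an in-between frame.

    Real frame generation warps pixels along motion vectors / optical flow
    before blending; a plain average like this ghosts on anything that moves.
    """
    blended = (1.0 - t) * prev_frame.astype(np.float32) + t * next_frame.astype(np.float32)
    return blended.clip(0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Two fake 1080p RGB frames standing in for consecutive rendered frames.
    prev_frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    next_frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    fake = naive_interpolated_frame(prev_frame, next_frame)
    print(fake.shape, fake.dtype)  # (1080, 1920, 3) uint8
```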
I play at 4k and 6k, with no FG and no DLSS, because:
a. I paid for a 4k and a 6k monitor, so I'll be damned before I run them at 2560 or 1080
b. the visual fidelity is abysmal to those of us who notice.
all these gimmicks are "intelligent" ways to lessen the look of reducing the resolution. This is important to the card industry so that it falsely appears there is a significant performance boost between generations.
This came about because running ray tracing had such a huge impact on FPS that it's used to:
a. make it look like each generation of cards is an amazing upgrade and a must
b. continue the push to have folks buy each subsequent generation of card
c. Game titles can look quite amazing even with pre-baked lighting
d. Ray tracing has a significant performance impact, but path tracing now has an even greater performance impact, as well as introducing significant visual noise to the scene.
e. De-noising takes more cycles and processing power than ever before, eliminating any gain from the architectural advancement of the new card.
Does this help clarify?
again, just my humble opinion.
J
PS.
Rather than give up visual fidelity and frames, I elected to force mGPU when possible and get maximum visual fidelity in each game I covet. mGPU is the only way to keep FPS and gain max visuals, BUT mGPU is SOOOO complicated, a house of cards, and NOT the mGPU everyone was promised a decade ago with DX12 and Vulkan, so it's obviously not the perfect answer.
I use my home GPUs for algorithms for medical imaging and data, but I also use my personal PC (with multiple GPUs) to game, so I can uniquely keep a foot in both camps.
DLSS / SSR is the interpolation that refills the screen, from a chosen render res/quality up to the actual output.
FG is interpolating frames to be injected in between actually rendered frames.
BOTH mechanisms work off reduced res, in an effort to reduce the workload [on the GPU].
These are intended as relief, combating GPU taxation, whether it be big res and/or ray tracing being on.
Depending on game/setup, there are greater chances of unpleasant [unplayable] presentation(s),
whether it be unwanted shimmer / chroma noise / janky, mis-detailed objects / …
I'm sure plenty of YTs have done assessments of DLSS/FSR for various games.
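The "reduce the workload" part is mostly just pixel arithmetic: shading cost scales roughly with the number of pixels rendered per frame (ignoring fixed per-frame costs, which is a simplification), so dropping the internal res cuts the work sharply:

```python
# Back-of-the-envelope pixel counts: shading work scales roughly with
# rendered pixels (ignoring fixed per-frame costs, which is a simplification).
resolutions = {
    "4K native":      (3840, 2160),
    "1440p internal": (2560, 1440),
    "1080p internal": (1920, 1080),
    "720p internal":  (1280, 720),
}

native = 3840 * 2160
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:>15}: {pixels/1e6:5.2f} MPix  ({pixels / native:.0%} of 4K)")
```

Rendering internally at 1440p is about 44% of the 4K pixel count, 1080p is about 25%, and 720p about 11%, which is where most of the FPS gain comes from before the upscaler adds anything back.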
A 1080p image that is upscaled to 4k with DLSS will most likely look better than a native 4k image.
FSR on the other hand has some very annoying smear in games like Black Myth Wukong.
It really depends on the game, the setting, the implementation…
Digital Foundry made some great comparisons.
Jensen, that you?
lol no. you can’t reduce the input data and then “ai”-fake your way back to BETTER than the original. Theoretically, a model trained on ALL of the effectively-infinite possible frames for a specific game could match the original - at a similarly near-infinite computation and power cost. upscaling is all about attempting to get close enough that people don’t notice.
Yes you can. Of course you can. DLSS takes the last 5 frames into consideration.
Now, if you have a fence in the distance, of course an upscaled 1080p image that has the information of the last 5 frames will look better than a native 4k image. It will look sharper and have less aliasing.
This isn’t a secret, go watch some DF videos.
Again, DLSS isn’t the same as these cheap TV upscalers we used to have, where they added some artificial sharpening.
I am not saying that DLSS is our savior, and I would rather have developers optimise instead of using frame gen to get decent frames. But that is not in my hands.
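For what it's worth, the "last 5 frames" part isn't magic: temporal upscalers jitter the camera slightly each frame and accumulate those samples over time, so several low-res frames together carry more unique samples than any single frame. A toy sketch of the accumulation step only, assuming a plain exponential moving average and ignoring the motion vectors and neural network that real DLSS relies on:

```python
import numpy as np

def accumulate(history: np.ndarray, current: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Exponential moving average over frames.

    Each new (jittered, low-res, upsampled) frame contributes a fraction of
    itself; older frames fade out gradually.  Real temporal upscalers also
    reproject the history along motion vectors and run a trained network,
    neither of which is shown here.
    """
    return (1.0 - alpha) * history + alpha * current

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A noisy "thin fence" signal: same underlying edge, different sampling noise per frame.
    truth = np.tile([0.0, 1.0], 8)            # alternating dark/bright slats
    frames = [truth + rng.normal(0, 0.3, truth.shape) for _ in range(5)]

    history = frames[0]
    for f in frames[1:]:
        history = accumulate(history, f, alpha=0.3)

    single_err = np.abs(frames[-1] - truth).mean()
    accum_err = np.abs(history - truth).mean()
    print(f"single frame error: {single_err:.3f}, accumulated error: {accum_err:.3f}")
```

With five noisy samples of the same "fence" the accumulated result lands measurably closer to the ground truth than any single frame, which is the whole argument for temporal reuse.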
It has 5 now-incorrect frames of a fence, at 1/4 the detail, all a bit different from each other and from the information it should be showing now. The game engine decides what is correct, not the driver; it can only guess. If you prefer the inaccurate, potentially extra detail or sharpness that the driver put there all on its own, that's your prerogative, but it's still incorrect. Just like those sharpening filters you mention, or the "blow out the color saturation" image 'enhancement', for another similar analogy.
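For completeness on the stale-history problem raised here: temporal accumulators do not trust old samples blindly. A common trick is neighborhood clamping (a generic TAA-style heuristic, not necessarily what DLSS does internally): the reprojected history is clamped to the min/max of the current frame's local neighborhood, so samples that disagree too much with what the engine just rendered get pulled back toward it. A rough sketch:

```python
import numpy as np

def clamp_history(history: np.ndarray, current: np.ndarray, radius: int = 1) -> np.ndarray:
    """Neighborhood clamping, a common TAA-style history rejection trick.

    For every pixel, the reprojected history value is clamped to the min/max
    of the current frame's surrounding pixels, so stale samples that no longer
    match what the engine just rendered cannot dominate the output.
    (Illustrative only; real temporal upscalers use more elaborate heuristics.)
    """
    h, w = current.shape
    clamped = history.copy()
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            nb = current[y0:y1, x0:x1]
            clamped[y, x] = np.clip(history[y, x], nb.min(), nb.max())
    return clamped

if __name__ == "__main__":
    current = np.zeros((4, 4))          # the fence just moved out of this tile
    history = np.full((4, 4), 0.9)      # old frames still "remember" the bright slats
    print(clamp_history(history, current))   # stale brightness gets rejected -> all zeros
```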
No. Let's leave out frame generation to keep it simple.
This is a comparison with the same resolution
Hope this makes it clear why a 1080p image with DLSS CAN provide a better result than 4k native.
It can use the last 5 real frames at 1/4 the detail of 4k native to produce a better result (not what is happening in this picture! In this comparison the resolution is the same; this is just so you get the basic idea).
Again, there are many DF videos to see it with your own eyes.
No, you can not.
There is a reason why in movie production people lug around big cameras with precision optics and huge sensors (the Arri 65 sensor is 54.1x25.5 mm, 14 times larger than an iPhone 14 camera sensor). Fundamental principle of the world:
Garbage in = Garbage out
With this entire debate, we are back to the time when Nvidia rendered games at lower resolution in order to be reviewed more favorably, straight back to 2003!
That is a totally flawed analogy that has no connection with what we are talking about here.
You can say a 1080p native image is garbage, but then so is a 4k native image.
Or that both are great. It does not matter. And again, you don’t even have to use upscaling.
Just look at the picture I posted. Are you seriously claiming that the picture on the right does not look better than the picture on the left?
It is the perfect analogy.
When rendering at full resolution (1080p for example), there is a reasonable expectation of getting the same image on entirely different machines. The same applies when upscaling using mathematical procedures (Nearest-neighbor, Bilinear or Lanczos for example).
While these algorithms run afoul of the unfortunate realities of physics (they cannot add information that was never rendered), they are at least deterministic.
You can say a 1080p native image is garbage, but then so is a 4k native image.
No.
My point is that native is superior to any scaling, since it is deterministic: 1920x1080 pixels are calculated and displayed, or 4096x2160 are calculated and displayed; all that changes is the computational time required.
Source
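To illustrate the determinism point: a purely mathematical upscaler such as nearest-neighbor is a fixed function of its input, so the same 1080p frame produces bit-identical 4K output on any machine, every run; there is no history and no learned model involved. A minimal sketch:

```python
import numpy as np

def nearest_neighbor_upscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Deterministic upscale: every output pixel copies one source pixel.

    No history, no model weights, no heuristics -- the same input always
    yields bit-identical output, on any machine.
    """
    in_h, in_w = img.shape[:2]
    ys = (np.arange(out_h) * in_h) // out_h
    xs = (np.arange(out_w) * in_w) // out_w
    return img[ys[:, None], xs]

if __name__ == "__main__":
    frame_1080p = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    a = nearest_neighbor_upscale(frame_1080p, 2160, 3840)
    b = nearest_neighbor_upscale(frame_1080p, 2160, 3840)
    print(a.shape, np.array_equal(a, b))  # (2160, 3840, 3) True
```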
Are you seriously claiming that the picture on the right does not look better than the picture on the left?
Three issues:
- Lacking context: Is the left side native or also upscaled? If so, what algorithm was used?
- Details: The crappiest upscaler can convincingly hugify a Utah teapot, as long as there are no details to preserve.
- Latency and motion: When the upscaler does its thing, it takes time, since it won't be O(1) complexity. And while at it, it may hallucinate detail, which can cause motion blur, ghosting or jittering.
From GN's DLSS 2.0 in Cyberpunk 2077 video:
Which one is native, which one is guestimated?
The Bottom Line:
I paid for all the pixels in my monitor, I paid for the whole PC, I am going to use all of it! Gaming went through the piss-filter era, it looked strange but it was crisp and did not sacrifice meaningful performance.
Now Nvidia has reached some local maximum in hardware they are willing or able to sell, so they reach for the cheap option: Fix it in post, meaning software. AMD and Intel unfortunately have not called them out on their BS and instead followed suit. I hate it!
- Lacking context: Is the left side native or also upscaled? If so, what algorithm was used?
Neither is upscaled, and both are the same resolution (WQHD).
Now with that out of the way, I would like to ask you again, which image looks better?
Fix it in post, meaning software. AMD and Intel unfortunately have not called them out on their BS and instead followed suit. I hate it!
Why do you think that is?
You seem to believe we are in a disagreement, but I would guess we agree on almost everything. You are just mixing together stuff that does not belong together.
- Am I a member of /r/fucktaa? Yes
- Do I dislike the strange ghosting and artifacts in Ghost of Tsushima? Yes
- Do I believe that modern games are rubbish and unoptimized? Yes
- Do I believe that instead of focusing on frame gen and other shenanigans, we would be better off spending that on raw power? Yes, but I also realize that people are not willing to pay 5k for a GPU
- Do I believe that NVIDIA is no longer interested in gamers and that the AI stuff is just leftovers from the datacenter business? Yes
- Do I believe that NVIDIA has no real competition, because AMD's stuff is more expensive to produce and NVIDIA is doing only the bare minimum? Yes
But at the same time, I do realize that an artificial and corrected frame can be better than a native frame.
Which one is native, which one is guestimated?
To me, they all look like boring crap. Based on AA I would guess C?
Heck, I think Cyberpunk is the perfect example of what is wrong with the gaming industry.
Besides the cyberpunk setting (which I love), the art style is total trash; there is sometimes nice lighting which looks awesome from some perspectives, but mostly it looks like trash, with horrible animations. The gameplay is also a rubbish mixture of a bad shooter, a bad GTA and a bad RPG.
All that while having less believable crowd and police AI than GTA Vice City.
So no, I am not one to spend 1k on the newest GPU. I would rather play FTL or Elden Ring, which in my opinion both look way better and are more fun to play (even at a horrible framerate).