Many people nowadays have no clue what proper behavior is, or how to respond properly to others. Like Pvape23, for example, who exhibits this exact behavior. This is the third time he has posted the exact same message after being asked to be more civil with his tone. Instead, he has chosen to throw a tantrum because he can't get his way.
That’s not really a fair call-out. I mostly tuned out of this thread before he showed up because it devolved into snide bickering.
If the death threats are real and not just marketing…
If they were real, this would be something for the police, not Reddit.
BTW, remember the example I gave with the not “made up” but “in between” frames with added latency?
NVIDIA Smooth Motion is a new driver-based AI model that delivers smoother gameplay by inferring an additional frame between two rendered frames. For games without DLSS Frame Generation, NVIDIA Smooth Motion is a new option for enhancing your experience on GeForce RTX 50 Series GPUs.
To me, those two mean the same: Fake Frames
Fake frames are made up based on data from the past. This, of course, is prone to visual artifacts.
“Inter frames”, on the other hand, are calculated after two real frames have been rendered.
B-Frames in H.265 are calculated based on two “real frames”.
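A toy illustration of why that works (my own example; the IBBP pattern is just one possible group-of-pictures layout, nothing H.265-specific):

```python
# Display order vs. decode order for a toy IBBP group of pictures.
# A B-frame references a past AND a future frame, so the future reference
# (here the P-frame) has to be decoded before the B-frames that point at it.
display_order = ["I0", "B1", "B2", "P3"]
decode_order  = ["I0", "P3", "B1", "B2"]   # P3 is pulled forward

refs = {"P3": ("I0",), "B1": ("I0", "P3"), "B2": ("I0", "P3")}
for frame in decode_order:
    needed = refs.get(frame, ())
    # Every reference must already have been decoded.
    assert all(decode_order.index(r) < decode_order.index(frame) for r in needed)
    print(f"decode {frame}, references: {needed or 'nothing'}")
```

That buffering of a future frame is also where the added latency comes from.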
You are free to believe/call your Netflix or BD videos “fake frames”.
I, on the other hand, will in 2030 enjoy a smooth picture playing Civ 7 with 60 “real frames” plus 5 inter frames per real frame (60 × 6 = 360 “total frames”) on my future OLED or hopefully microLED 360Hz monitor.
Sometimes, in between those Civ turn loading times, I will think about how I manage to survive the added latency of 33ms (the biggest possible added latency with 60 FPS real frames).
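Back-of-the-envelope math behind that 33ms figure (my numbers, assuming the interpolator must hold each rendered frame until the next one arrives):

```python
# Worst-case added latency when interpolating between real frames at 60 FPS.
base_fps   = 60
frame_time = 1000 / base_fps   # ~16.7 ms between real frames

# Frame N cannot be shown until frame N+1 exists (one full frame time),
# and display pacing can push the total delay toward a second frame time.
worst_case = 2 * frame_time
print(f"frame time: {frame_time:.1f} ms, worst case added: {worst_case:.1f} ms")
# -> frame time: 16.7 ms, worst case added: 33.3 ms
```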
As explained, this is false and based on a superficial understanding of video compression that ignores every aspect of the process save for one. Inter-frame prediction is just a single step in the encoding of a B-frame, and additional information from the real frame that it is based on is also used.
After prediction, the residual data (the difference between the predicted and actual values) undergoes a transformation process, typically using the Discrete Cosine Transform (DCT). This helps concentrate the signal energy into fewer coefficients. Quantization then reduces the precision of these coefficients, further compressing the data.
This is a critical aspect of video compression and cannot just be ignored. If the precision of these coefficients is not reduced, then the originally encoded frame is faithfully reconstructed and the encoding is considered lossless. The way you’re describing B-frames, on the other hand, excludes this step entirely. Without the benefit of residual encoding, any codec would look far worse than encoding at even the lowest quality setting available. You’re describing an unwatchable video as though it’s Blu-ray quality.
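A minimal sketch of that residual/transform/quantize loop, assuming a single 8x8 block and one flat quantization step in place of a real quantization matrix (actual codecs also entropy-code the coefficients afterwards):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Toy residual: the difference between a predicted block and the real block.
rng = np.random.default_rng(0)
real      = rng.integers(0, 256, (8, 8)).astype(float)
predicted = real + rng.normal(0, 4, (8, 8))   # prediction is close, not exact
residual  = real - predicted

# Transform the residual (2D DCT-II), then quantize the coefficients.
coeffs    = dctn(residual, norm="ortho")
qstep     = 8.0                               # flat step instead of a quant matrix
quantized = np.round(coeffs / qstep)

# Decoder side: dequantize, inverse transform, add back onto the prediction.
reconstructed = predicted + idctn(quantized * qstep, norm="ortho")
print("max error vs. real frame:", np.abs(real - reconstructed).max())
# As qstep shrinks, the error goes to zero: the real frame is recovered.
```

Drop the residual entirely, which is what the B-frame description above amounts to, and you’re left with just the prediction: exactly the unwatchable result I described.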
NVIDIA’s Smooth Motion is just NVIDIA’s take on the motion interpolation found in many TVs, which looks awful.
As described, this is just misleading (= wrong by omission).
Yes, similar to NVIDIA smooth motion.
Have you watched the video in your link? Tom Cruise is talking about how you get a “soap opera” effect instead of a “cinematic” one.
I’ll happily take the soap opera effect over a “cinematic look” for Civ.
You just keep making hollow assertions without substantiating them… There can be no residual data in the first place with interpolated frames because a reference does not exist. Same for extrapolated frames.
There is no video encoder, nor will there ever be one, that ignores the frames it’s encoding, and there is no video decoder, nor will there ever be one, that ignores the frames that were encoded. Video codecs do not interpolate or extrapolate frames. They encode and decode each and every frame from a real frame.
I realize I’ll never separate you from your fantasies about 5:1 or 10:1 generated frames and 1000 FPS. At this point I’m just counter-posting to dissuade others from falling prey to the same marketing nonsense that you have.
I am not sure if you just want to split hairs over semantics or if we really disagree.
Anyway, a B-frame is not a real frame.
If the B-frame is at position 2, it is calculated/rendered/compressed, whatever, based on frames 1 and 3.
If I play Civ 7 and move to the right, real frame 1 is rendered and then real frame 3. Then Smooth Motion frame 2 is generated. After that, it displays frame 1, then 2, then 3.
Yes, that adds latency.
Yes, there might be some very minor artifacts, just like video compression has artifacts.
Yes, this is not 100% comparable, since the B-frame existed, while the smooth frame is made up, an in-between estimation.
Will it matter? I don’t think so. Even made-up future frames look pretty good in some scenarios, for some games. A smooth frame will look even better, because it is a way easier problem.
I will get twice the frame rate and a smoother scrolling experience in Civ 7, with the downside of artifacts that you will have a hard time finding even in still images.
Oh, and BTW, it looks like I have to wait until 2030 anyway, since Civ 7 will not reach a playable state before then, according to the reviews that just dropped.
To get back on topic a little bit: there is news about Threat Interactive.
I asked before:
He now does coupons for gaming chairs.
So I think it is pretty safe to say that the answer is YouTube/influencer money and not engine development or consulting.
We really disagree, and the semantic game being played is by Blur Busters with the word “predictive”.
B-frames (and “predictive” P-frames) are real frames because every aspect of P and B-frames is based directly on a real frame, including references to other frames. Most of the time spent encoding is searching for blocks in reference frames that are as close to identical to the real frame as possible. When the most similar block is found it’s then subtracted from the corresponding block in the real frame and the difference between the reference and the real frame is encoded as a residual. How much an encoded frame differs from the real frame that it’s based on is a choice not an estimation which is why video compression can be lossless.
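A sketch of that search, assuming a plain SAD (sum of absolute differences) full search over a small window; real encoders use smarter search patterns and sub-pixel refinement, but the principle is the same:

```python
import numpy as np

def best_match(reference, block, top, left, search=8):
    """Find the offset in `reference` whose block is most similar to `block` (lowest SAD)."""
    h, w = block.shape
    best = (0, 0, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= reference.shape[0] - h and 0 <= x <= reference.shape[1] - w:
                sad = np.abs(reference[y:y+h, x:x+w] - block).sum()
                if sad < best[2]:
                    best = (dy, dx, sad)
    return best

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, (64, 64)).astype(float)
block = reference[19:35, 14:30]          # a 16x16 block of the "real" frame:
                                         # the reference content shifted by (3, -2)
dy, dx, sad = best_match(reference, block, top=16, left=16)
residual = block - reference[16+dy:32+dy, 16+dx:32+dx]
print((dy, dx), sad, residual.max())     # -> (3, -2) 0.0 0.0: motion vector found,
                                         #    residual is all zeros for pure motion
```

When the motion isn’t pure translation, the residual is nonzero and gets encoded as described above, so the real frame is still recoverable.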
Frame generation on the other hand can never be lossless because there is no real frame… that’s the whole point. It’s always just an approximation and even if given unlimited resources it can never be perfect because the definition of perfection is a frame that is never rendered. Video compression takes a real frame and decides to throw away detail in exchange for reduced size while generative AI is going in completely the opposite direction and approximating an imaginary real frame that does not exist.
Not really; analogs to NVIDIA’s Smooth Motion are found in many TVs, so any game on a TV with motion smoothing serves as a preview of the effect. I’m sure NVIDIA’s implementation will be an improvement, especially considering it will have the benefit of rendered rather than estimated motion vectors along with the depth buffer, but both will still be estimations, so they’ll both suffer the same issues with image quality and latency. This is why “game mode” exists on TVs: to turn off effects detrimental to gaming like motion smoothing.
The video was just a bunch of opinion… There’s obviously nothing “broken” about UE4, though, or it couldn’t have been nearly as successful as it was/is. So he’s a bit full of it right from the title. He also talked a lot while not saying much.
For some reason he’s campaigning for wider adoption of cheap screen-space lighting effects over Lumen and Nanite. With the slowed advancement of GPU performance per dollar, I’m sure many developers will opt to avoid forward-looking, expensive effects like Lumen and Nanite, but nobody is being forced to use UE5, and even those who choose to use it are still free to use features like Lumen and Nanite or not. Seems like all his issues are non-issues.
If there’s one thing I would take issue with, it’s his claim that ordered dithering is invisible at 60 Hz. Maybe at 4K, if you’re not sitting very close, it’s not very obvious… but at 1080p it’s definitely visible.
That is a logical fallacy. Windows, I would say, is pretty horrible in places, yet it is on 85%+ of all desktops.
It is possible to have a bad system that works well enough. Proper game engines with the latest bells and whistles are really time-consuming to implement, and even harder to make performant enough to deliver 100+ FPS with all the eye candy enabled.
No software is perfect, UE4 included, and yes, it does not surprise me that some parts of it may be horribly broken. That does not have to mean the entire engine is crap, though.
Not at all. Broken implies that something doesn’t work or isn’t fit for the task. UE’s past and continued success proves that neither is the case.
If I have an awesome engine that, say, uses Bubble Sort as its main sorting algorithm, because “Optimization is for chumps”, and for 99% of my user base that is all they need…
Does that mean my sorting algorithm is busted, or not?
(Oh, and in case you missed it, Bubble Sort is one of the worst-performing sorting algorithms out there.)
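For reference, the whole algorithm fits in a dozen lines, which is exactly why it ends up in places it shouldn’t (a minimal sketch):

```python
def bubble_sort(items):
    """O(n^2) worst case: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)
    n = len(items)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:        # already sorted: best case is O(n)
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))   # -> [1, 2, 4, 5, 8]
```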
Not. Especially considering UE supports a range of alternatives. Even in your example, only the “main sorting algorithm” is “the worst performing”; if it’s not suitable, then choose an alternative, import a third-party library, or write your own.
UE is not broken, his video title was just clickbait and fuel for boring semantic “arguments”.
Much larger textures, improved geometry, expensive but minor graphical improvements…
Diminishing returns, but there you go.
Turn the details down and it will still look like 2013 or better.
You are completely missing the fact that quite a few games target 30 FPS on consoles, and UE has optimizations for that.
“Just use 3rd party mods”? Dude, do you even realize what you are saying? You are saying my broken engine isn’t broken because I can just call Pimp My Engine™ to come fix it right up.
You’re failing to grasp the fact that UE is an open-source software development tool… not an end-user product. If you think the use of a third-party library, or being able to configure the engine to produce artifacts, equates to it being broken in any way, then you are completely out of your element.
No. It really is not. If it were, I could download, modify, and republish my changes to the source code without Epic having a say in the matter.
The source code is available, but you still need to sign away rights to contribute to it. And claiming that the core package is not broken because a third-party shop fixes it is still about as dumb as claiming a car isn’t broken because mechanics exist. #worksasintended
Whatever… I have zero interest in engaging in one semantic argument after another.