A while ago I watched Digital Foundry's video on DLSS 3, and one particular thing struck me as odd. Check out the timestamped part of the video (I would still recommend the whole video, though):
Thus, it appears that even with DLSS 2, image quality improves the higher the FPS it can deliver. As a result, DLSS Performance Mode at 2160p@60fps will have lower image quality than DLSS Performance Mode at 2160p@240fps. I'm wondering how DLSS Quality Mode at 2160p@30fps would compare to DLSS Performance Mode at 2160p@240fps.
This is something entirely new to me and I had not heard it before, which is why I wanted to share it here. Furthermore, I'm wondering whether the same applies to FSR and XeSS.
One way I could explain this is if one or more of these statements hold true:
- Given that DLSS uses motion vectors, i.e. it remembers previous frames, it is easier to reconstruct an image if the difference between frame n and frame n-1 is as small as possible
- Maybe DLSS uses all frames of the past x seconds, which would result in a higher framerate delivering more data.
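To illustrate the first point, here is a deliberately simplified sketch of temporal accumulation: a running history buffer is blended with each new frame. This is only a toy model of my understanding (real upscalers like DLSS/FSR/XeSS also reproject the history with motion vectors, which I omit here); the function names and the ramp scene are made up for the example. The idea is that the same scene change, split into more and smaller per-frame steps, leaves the history closer to the current frame at every step:

```python
def temporal_accumulate(frames, alpha=0.1):
    """Blend each new frame value into a running history buffer.

    Toy stand-in for temporal accumulation: history is an exponential
    moving average of past frames. Small frame-to-frame differences
    keep the history close to the newest frame.
    """
    history = float(frames[0])
    for frame in frames[1:]:
        history = (1 - alpha) * history + alpha * frame
    return history


def ramp(steps):
    """A scene value drifting linearly from 0.0 to 1.0 in `steps` frames
    (hypothetical stand-in for one second of animation)."""
    return [i / (steps - 1) for i in range(steps)]


# Same motion, different framerates: 5 big steps vs. 50 small steps.
err_low = abs(temporal_accumulate(ramp(5)) - 1.0)
err_high = abs(temporal_accumulate(ramp(50)) - 1.0)
print(f"low-fps error: {err_low:.3f}, high-fps error: {err_high:.3f}")
```

In this toy setup the high-framerate history lags the true scene value much less than the low-framerate one, which matches both intuitions above: smaller per-frame deltas *and* more samples per unit of scene time.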
Do you have any other ideas or input?