emoose / DLSSTweaks

Tweak DLL for NVIDIA DLSS, force DLAA on DLSS-supported titles, tweak scaling ratios & DLSS 3.1 presets, override DLSS versions without overwriting game files.

Question regarding upscaling quality

Azulath opened this issue · comments

Now that we can set an arbitrary render resolution for a given target resolution, I was wondering what contributes most to the final image quality.

So, assume two equally sized screens, one running at 1440p and the other at 2160p. If someone used DLSSTweaks to set the render resolution to 960p on both, would the image on both screens have the same quality because of the identical render resolution? Would it look better on the 2160p screen, since it has a higher target resolution? Or would it look better at 1440p, since the DLSS algorithm has fewer pixels to reconstruct?
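
For concreteness, the arithmetic behind the question can be sketched in a few lines (plain Python, nothing DLSS-specific; it simply assumes the scale ratio is render height divided by target height):

```python
# Illustrative arithmetic only (not DLSSTweaks code): a fixed 960p
# internal resolution corresponds to a different scale ratio on each screen.
RENDER_HEIGHT = 960

for target_height in (1440, 2160):
    ratio = RENDER_HEIGHT / target_height
    print(f"{target_height}p target -> ratio {ratio:.3f}")

# 1440p target -> ratio 0.667   (the usual Quality-mode ratio)
# 2160p target -> ratio 0.444   (more aggressive than Performance's 0.50)
```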

commented

It will look better at the higher resolution, as there are more upscaled pixels to populate, so you get more clarity and less aliasing, just like when comparing native 1440p and 2160p. That is why 1080p -> 2160p works well with DLSS. But keep in mind you will always get a better image when upscaling from a higher base resolution. It's also worth mentioning that DLSS doesn't care about aspect ratios, as it smartly masks/filters them out during the upscaling process, so you can use custom scales like 0.8 or 0.9 just fine if you want much higher quality (nice for a low target resolution like 1080p).
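
As a rough illustration of what those custom scales mean in practice (assuming a 16:9 target; this is just arithmetic, not DLSSTweaks code):

```python
# Internal resolutions that custom ratios like 0.8 or 0.9 give at a 1080p target.
TARGET_W, TARGET_H = 1920, 1080

for ratio in (0.8, 0.9):
    w, h = round(TARGET_W * ratio), round(TARGET_H * ratio)
    print(f"ratio {ratio}: {w}x{h} -> upscaled to {TARGET_W}x{TARGET_H}")

# ratio 0.8: 1536x864 -> upscaled to 1920x1080
# ratio 0.9: 1728x972 -> upscaled to 1920x1080
```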

@PweSol are you sure? I have heard an entirely different statement from someone else, claiming that the image will degrade in relation to the number of pixels it needs to guess. Have you tested this yourself or do you know of any article/video where this is analysed?

commented

@PweSol are you sure? I have heard an entirely different statement from someone else, claiming that the image will degrade in relation to the number of pixels it needs to guess. Have you tested this yourself or do you know of any article/video where this is analysed?

What you said is generally true for upscaling AIs, but DLSS is a bit different. I work on such AIs myself, professionally, so my source is experience. I'm not an Nvidia engineer, but I've tested and developed more than most people get to, so I believe my words here have some merit. You can always test it yourself instead of listening to strangers on the Internet, whether that's me or the other guy you mentioned.

commented

The main confusion comes from this:

DLSS 2.0 is NOT an upscaling AI. It is a form of temporal anti-aliasing upsampling. Simplified: it uses previous frames to recover detail that is no longer present in the current frame. It works universally at any given resolution.
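
A very rough sketch of that temporal-upsampling idea, assuming per-pixel motion vectors and a jittered low-resolution render; this is a toy illustration, not NVIDIA's actual implementation:

```python
import numpy as np

def temporal_upsample(history, low_res_frame, motion_vectors, jitter, alpha=0.1):
    """Toy sketch of temporal AA upsampling (NOT NVIDIA's implementation).

    history        -- previous accumulated frame at target resolution (H, W, 3)
    low_res_frame  -- current jittered render at lower resolution (h, w, 3)
    motion_vectors -- per-pixel motion at target resolution, in pixels (H, W, 2)
    jitter         -- sub-pixel camera offset used for this frame (dx, dy)
    alpha          -- weight given to the new sample
    """
    H, W, _ = history.shape
    ys, xs = np.mgrid[0:H, 0:W]

    # 1. Reproject history: fetch where each output pixel was last frame.
    prev_x = np.clip(xs - motion_vectors[..., 0], 0, W - 1).astype(int)
    prev_y = np.clip(ys - motion_vectors[..., 1], 0, H - 1).astype(int)
    reprojected = history[prev_y, prev_x]

    # 2. Upsample the current jittered low-res frame to the target grid
    #    (nearest neighbour here; real upscalers resample far more carefully).
    h, w, _ = low_res_frame.shape
    src_y = np.clip(((ys + jitter[1]) * h / H).astype(int), 0, h - 1)
    src_x = np.clip(((xs + jitter[0]) * w / W).astype(int), 0, w - 1)
    current = low_res_frame[src_y, src_x]

    # 3. Blend: accumulated history carries detail from earlier frames, the new
    #    jittered sample adds fresh information. DLSS 2.x replaces hand-tuned
    #    heuristics for rejecting stale history with a learned network.
    return (1.0 - alpha) * reprojected + alpha * current
```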

General upscaling AIs/techniques are usually trained on a dataset of degraded images at a certain resolution (or, more often nowadays, a range of resolutions). The neural network then learns to recreate the target source images, typically at a 4x scale (less often 2x, 8x, or 16x). This is where upscaling AIs have both their strength and their weakness: they work best at the resolutions and scale factors they were trained on, so using them at lower or higher resolutions produces worse results.
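
To illustrate why the scale factor is baked in, here is a minimal ESPCN-style super-resolution sketch (illustrative PyTorch, not any shipping upscaler); the final PixelShuffle layer hard-codes the upscale factor the network was trained for:

```python
import torch
import torch.nn as nn

class FixedScaleSR(nn.Module):
    """Minimal ESPCN-style super-resolution net (illustrative only).

    The upscale factor is baked into the final PixelShuffle layer and the
    weights are fit to the degradations seen in training, which is why such
    models do best at the scales/resolutions they were trained on.
    """
    def __init__(self, scale: int = 4, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Predict scale*scale sub-pixels per output channel...
            nn.Conv2d(32, channels * scale * scale, kernel_size=3, padding=1),
            # ...and rearrange them into an image scale times larger.
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.body(x)

# A 480x270 input always comes out exactly 4x larger (1920x1080),
# regardless of what display resolution you actually have.
model = FixedScaleSR(scale=4)
out = model(torch.randn(1, 3, 270, 480))
print(out.shape)  # torch.Size([1, 3, 1080, 1920])
```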

Going back to DLSS - it doesn't suffer from what I just explained. The similarity between them is that DLSS 2.0 is enhanced by this kind of neural network learning procedure, hence it achieves superior results compared to other temporal AA upsamplers like the original TAAU, or now predominantly TSR and FSR 2.0 (XeSS is also enhanced by AI, just like DLSS 2.0).

DLSS 2.0 gets the best of both worlds while having the fewest negatives. The main negative is that it is incapable of generating new data to enhance the image the way AI upscalers do. This primarily results in low-resolution textures staying low-res, which is why sharpening is often used to compensate.
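
The sharpening referred to here is usually some flavour of contrast or unsharp-mask filtering; a toy unsharp mask (not NVIDIA's sharpening pass) looks roughly like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=0.5):
    """Toy unsharp-mask sharpen (illustrative; NOT NVIDIA's sharpening pass).

    Boosts high-frequency detail by adding back the difference between the
    image and a blurred copy -- roughly the effect people reach for when
    upscaled textures look soft. Expects a float image in [0, 1], shape (H, W, 3).
    """
    blurred = gaussian_filter(image, sigma=(radius, radius, 0))
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)
```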

It is also worth mentioning that DLSS 1.0 was indeed an AI upscaler (a spatial image upscaler, to be precise). It used two stages during its upscaling process, but I won't get into that here since, as you surely know, the results weren't good anyway.

I hope that clears it up.

commented

First, a 960p image upscaled to 4K vs. to 1440p: the 4K frame would look worse, since it's working from less detail to begin with and there are diminishing returns in output quality at 4K; 960p at a 4K target is an even more aggressive ratio than the Performance preset. In some games it looks OK, but it's clearly inferior. It has less detail and is more likely to show errors/artifacts, while the 1440p image has significantly fewer pixels to reconstruct, and the jittered frames can only carry so much (accurate) detail up to the higher resolution.

Personally, I like DLSS 2.x upscaling 3200x1800 to 4K. That's just over 80% of 4K on each axis, so image quality is excellent and you still get a nice boost in performance. I don't have an issue with quality, but I have a 4090, so I'll render as close to 4K as possible while getting 120 fps, or if a game is really demanding I'll settle for 90.

That's not exactly true in regards to DLSS 2.x, but instead of typing for 20 minutes, OP should read the wiki page and Nvidia's own documentation; it's a quick read. To simplify: DLSS 2.x is fancy TAAU, but instead of deleting errors and giving the image a blurry look, DLSS 2 uses motion vectors along with the jittered low-res frames and the previous upscaled frame, and that's it, besides some complex math and, if needed, some extra training which you can preview with the dev DLL.
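
For reference, the sub-pixel jitter mentioned here is typically generated from a low-discrepancy sequence such as Halton; a small sketch follows (the exact sequence and its length vary per engine, so treat this as illustrative):

```python
def halton(index: int, base: int) -> float:
    """Radical-inverse (Halton) value in [0, 1) for a 1-based index."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Sub-pixel jitter offsets in [-0.5, 0.5), bases 2 and 3 -- the kind of camera
# jitter temporal upsamplers rely on so each frame samples a slightly
# different spot within every pixel.
jitter = [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, 9)]
for frame, (dx, dy) in enumerate(jitter):
    print(f"frame {frame}: jitter ({dx:+.3f}, {dy:+.3f})")
```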