dual_sport_dork 🐧🗡️@lemmy.world

For the benefit of anyone else reading this, nVidia’s DLSS3 and DLSS4 absolutely do incorporate motion interpolation (i.e. fake frames) via various methods. Fake frame generation can be disabled, at least for now, but that’s really not the point. What’s more to the point is that the only headline capability the 50 series nVidia cards add, for instance, is an even greater degree of fake frame generation. nVidia clearly thinks that their future is in fake frames.
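If “fake frames” sounds abstract, here’s the idea in toy form. This is emphatically not nVidia’s actual frame-generation pipeline (which involves optical-flow hardware and a neural network); it’s the simplest possible stand-in to show what “motion interpolation” means: inventing a frame the game engine never rendered by guessing from the frames around it.

```python
# Toy illustration of "fake frame" generation: given two frames the GPU
# actually rendered, invent an intermediate frame that was never rendered.
# NOT nVidia's algorithm; just the simplest possible blend to show the concept.
import numpy as np

def fake_intermediate_frame(frame_a: np.ndarray, frame_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Blend two rendered frames to synthesize one in between (t in [0, 1])."""
    mix = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mix.astype(frame_a.dtype)

# Two "rendered" 4x4 grayscale frames: a bright square moving one pixel right.
frame_n = np.zeros((4, 4), dtype=np.uint8)
frame_n[1:3, 0:2] = 255
frame_n_plus_1 = np.zeros((4, 4), dtype=np.uint8)
frame_n_plus_1[1:3, 1:3] = 255

# The interpolated frame is a plausible-looking guess, not real scene data:
# the square smears across both positions instead of landing where the game
# engine would actually have put it.
print(fake_intermediate_frame(frame_n, frame_n_plus_1))
```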

DLSS Super Resolution is the image upscaling scheme, and is now a component of DLSS3/4, but claiming that the current incarnation of DLSS is not intended to generate frames out of whole cloth is inaccurate. nVidia labeling both of these things “DLSS” probably did not do anyone’s ability to keep track of this ongoing clusterfuck any favors. If you have a 30 series card or below, you are limited to upscaling, but upscaling is not the main thing I’m griping about.

(This is now the case with AMD’s FSR 3.1 and 2.0 as well, which explicitly mention “temporal upscaling,” i.e. once again fake frames, in their blurbs.)
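For contrast, here’s the other half of the “DLSS” umbrella in equally toy form: plain upscaling, the part that 30 series and older cards are limited to. A nearest-neighbor resize is a crude stand-in for DLSS Super Resolution / FSR upscaling (which are far more elaborate), but it shows the distinction: upscaling resizes a frame the GPU really rendered, while frame generation invents frames that were never rendered at all.

```python
# Toy upscaling: take a frame the GPU really rendered (at low resolution)
# and resize it. No new frames are invented. Nearest-neighbor here is only
# a stand-in for the real upscalers, which are much more sophisticated.
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    """Upscale a rendered frame by an integer factor using nearest-neighbor."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

# A 2x2 "rendered" frame blown up to 4x4: same content, bigger grid.
low_res = np.array([[10, 200],
                    [60, 130]], dtype=np.uint8)
print(upscale_nearest(low_res, 2))
```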

If upscaling in whatever form looks better to you, mind you, I’m not trashing your opinion. To some degree, options exist for a reason. Some motherfuckers play their emulators with SuperEagle scaling enabled, or whatever. I dunno, it takes all kinds. But silicon space that your card’s maker dedicated to AI and upscaling fuckery is also silicon that could just have been allocated to bigger or more rendering pipelines, and that’s exactly what they didn’t do.

But towards your last point, absolutely yes. This is also how raytracing and RTX are being pitched now that the cat is out of the bag that RTX performance is generally trash and that it achieves very little in terms of adding usable, gameplay-conveying visual information. “Oh, but now instead of calculating light maps in advance, developers can just have it performed in not-quite-real-time on the GPU [wasting a shitload of silicon and electricity calculating this over and over again when it could have been done just once at the studio]! It’s so much easier!!!”
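To spell out the bit in brackets: baked lighting pays the expensive cost once, offline, and ships the result, while per-frame ray-traced lighting redoes comparable work on every frame of every player’s GPU. The numbers and the fake_expensive_lighting function below are made up purely for illustration; this is a sketch of the trade-off, not of any real renderer.

```python
# Sketch of "bake once at the studio" vs. "recompute every frame on the GPU".
# The lighting function is a fake stand-in for expensive per-texel math.
import time

def fake_expensive_lighting(x: int, y: int) -> float:
    """Stand-in for an expensive per-texel lighting calculation."""
    total = 0.0
    for i in range(2000):                 # pretend this loop is costly ray math
        total += ((x * 31 + y * 17 + i) % 97) / 97.0
    return total / 2000.0

SIZE = 32

# Bake once: the studio pays this cost a single time and ships the result.
start = time.perf_counter()
lightmap = [[fake_expensive_lighting(x, y) for x in range(SIZE)] for y in range(SIZE)]
bake_cost = time.perf_counter() - start

# Runtime with a baked lightmap: per frame, lighting is just a lookup.
start = time.perf_counter()
frame_light = [[lightmap[y][x] for x in range(SIZE)] for y in range(SIZE)]
lookup_cost = time.perf_counter() - start

print(f"bake once: {bake_cost:.3f}s   per-frame lookup: {lookup_cost:.4f}s")
print("the real-time approach pays something like the bake cost again every frame")
```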

This is deeply stupid. Miss me with all that shit.

It seems we’ve finally reached the plateau where the hardware vendors (or at least nVidia) can’t or won’t bring any meaningful new performance enhancements to the table for whatever reason, so in order to keep the perpetual upgrade treadmill going they’re resorting to bolting gimcrack crap onto the hardware to help them cheat instead. Maybe someday actual per-pixel real-time raytracing will be viable, and for certain applications that could indeed be rad, but trying to force it half-assed now is pretty counterproductive. Ditto with frame generation. I’m sure it has applications in non-interactive media or video rendering, but trying to shoehorn it into gaming doesn’t make any sense.
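As a back-of-the-envelope illustration of why frame generation suits non-interactive playback better than games (my framing, not something nVidia says): a generated frame can only be built from frames that already exist, so it smooths motion on screen but doesn’t let your input show up any sooner. The 60 fps figure below is arbitrary.

```python
# Why interpolated frames smooth motion but don't make a game respond faster:
# a generated frame sits between two already-rendered frames, so it can never
# reflect input that arrived after the newer of the two. Numbers are illustrative.
RENDER_FPS = 60                      # frames the GPU actually renders
frame_time_ms = 1000 / RENDER_FPS    # ~16.7 ms between real frames

# With one generated frame inserted between each pair of real frames,
# the display shows 120 "frames" per second...
displayed_fps = RENDER_FPS * 2

# ...but a button press still has to wait for the next *real* frame to be
# rendered before it can influence anything on screen.
worst_case_input_delay_ms = frame_time_ms

print(f"displayed: {displayed_fps} fps, real: {RENDER_FPS} fps")
print(f"input still waits up to ~{worst_case_input_delay_ms:.1f} ms for a real frame")
```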