It is my opinion that we reached peak graphics 6 or 7 years ago, when the GTX 1080 was king. Why?
Games from that era look gorgeous (e.g. Shadow of the Tomb Raider), yet were optimized well enough to run high/ultra at FHD on an RX 570.
We didn’t need to rely on fakery like DLSS and frame generation to get playable frame rates. If anything, people used to supersample for the ultimate picture quality. Even upping the rendering scale to 1.25 made everything so crisp.
MSAA and SMAA antialiasing look better, but somehow even TAA from that era doesn’t seem as blurry as today’s. These days you might as well use FXAA.
Graphics today seem ass-backward to me: render at 60…70% scale to get good framerates, FX are often rendered at even lower resolution, slap on overly blurry TAA to hide the jaggies, then use some upsampling trickery to get back to native resolution. And it’s still blurry, so squirt some sharpening and noise on top to create an illusion of detail. And it still runs like crap, so throw in frame interpolation for the illusion of a higher frame rate.
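Put numbers on it and the dance gets even sillier (a back-of-the-envelope sketch; the percentages are the point, not any particular engine’s numbers):

```c++
#include <cstdio>

int main() {
    const int native_w = 3840, native_h = 2160;  // 4k output target

    // Old-school supersampling went the other way: 1080p at 1.25x scale
    // is 2400x1350, ~1.56x the pixels of native -- hence the crispness.
    const float render_scale = 0.67f;            // the typical 60...70% internal scale

    const int internal_w = (int)(native_w * render_scale);  // 2572
    const int internal_h = (int)(native_h * render_scale);  // 1447

    const float pixel_ratio =
        (float)(internal_w * internal_h) / (float)(native_w * native_h);
    std::printf("internal: %dx%d (%.0f%% of native pixels)\n",
                internal_w, internal_h, pixel_ratio * 100.0f);  // ~45%

    // Half-res effects (AO, volumetrics) pay for only a quarter of THAT,
    // i.e. roughly 11% of the native pixel count in this example.
    std::printf("half-res FX: %dx%d\n", internal_w / 2, internal_h / 2);
}
```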
I think it’s high time we were able to run non-raytraced graphics at 4k native and raytraced at 2.5k native on 500€-MSRP GPUs with no trickery involved.
We peaked when we had full hd. After all, what could top full high definition… fuller high definition? That would just be silly.
GPUs are getting better, but demand from the crypto and ML/AI markets means they can just jack up the price of every new card higher than the last, so prices have stopped dropping with each new generation.
Intel is saving us with their GPU prices; too bad they haven’t made good drivers YET.
We didn’t need to rely on fakery like DLSS and frame generation to get playable frame rates.
If you truly believe what you wrote, then you should never look into the details of how a game world is rendered. It’s fakery stacked upon fakery that somehow looks great. If anything, the current move to ray tracing with upscaling is less fakery than what came before.

There’s a saying in computer graphics: if it looks right, it is right. Meaning you shouldn’t worry that a technique makes a mockery of how light actually works, as long as the viewer won’t notice.
That’s the point
Sure, all graphics is about creating an illusion.

But there’s a stark difference between optimization like culling, occlusion planes, LODs, half-res rendering of costly FX (like AO) and using a crutch like lowering the rendering resolution of the whole frame to try and make up for bad optimization or crap hardware. DLSS has its place for 150…200€ entry-level GPUs trying to drive a 2.5k monitor, not 700€ “midrange” cards.
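The honest tricks are targeted, something like this (a toy sketch with made-up thresholds, not any engine’s actual code):

```c++
#include <cstdio>
#include <initializer_list>

// Classic distance-based LOD: swap in cheaper meshes as objects recede.
// Thresholds are invented here; real engines tune these per asset.
int pick_lod(float distance_m) {
    if (distance_m < 10.0f) return 0;   // full-detail mesh
    if (distance_m < 40.0f) return 1;   // reduced mesh
    return 2;                           // impostor / lowest detail
}

int main() {
    // Half-res AO: shade a quarter of the pixels, then upsample. The
    // saving is local to one soft, low-frequency effect -- the rest of
    // the frame still renders at full resolution.
    const int w = 1920, h = 1080;
    const int ao_w = w / 2, ao_h = h / 2;
    std::printf("AO shaded pixels: %d of %d (%.0f%%)\n",
                ao_w * ao_h, w * h, 100.0f * (ao_w * ao_h) / (w * h));

    for (float d : {5.0f, 25.0f, 100.0f})
        std::printf("distance %5.1fm -> LOD %d\n", d, pick_lod(d));
}
```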
But there’s a stark difference between optimization like culling, occlusion planes, LODs, half-res rendering of costly FX (like AO) and using a crutch like lowering the rendering resolution of the whole frame to try and make up for bad optimization or crap hardware.
There is not a stark difference if you describe the techniques objectively and don’t twist them into what you feel they’re like.
There are so many steps in the render pipeline where native resolution isn’t used, yet I don’t hear the crowd complaining about shadow map sizes or about reflections being half res. Upscaling is just another tool that lets us create better-looking frames at playable refresh rates. Compare Alan Wake or Avatar with DLSS against any game without it and they still come out on top.
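A “native” frame is already a patchwork of resolutions; the buffers behind one frame might look something like this (illustrative numbers, not from any specific title):

```c++
#include <cstdio>

struct Buffer { const char* name; int w, h; };

int main() {
    const int out_w = 3840, out_h = 2160;  // "native 4k" output

    // A frame is assembled from buffers that mostly are NOT output
    // resolution -- and nobody calls those pixels fake.
    const Buffer buffers[] = {
        {"shadow map (per light)", 2048, 2048},            // screen-independent
        {"half-res reflections",   out_w / 2, out_h / 2},
        {"volumetric fog grid",    160, 90},               // plus depth slices
        {"bloom mip (top level)",  out_w / 2, out_h / 2},
        {"final composite",        out_w, out_h},
    };
    for (const Buffer& b : buffers)
        std::printf("%-24s %5d x %d\n", b.name, b.w, b.h);
}
```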
DLSS has its place for 150…200€ entry-level GPUs trying to drive a 2.5k monitor, not 700€ “midrange” cards.
Just because you’re unhappy with Nvidia’s pricing strategy doesn’t mean you should slander new render techniques. You’re mixing two different topics.
Christ, the first GPUs I saw path-tracing on were from 2010. Utterly ridiculous how games finally added it, mostly as a tiny visual detail, and it makes modern supercomputers melt.
And one of the touted benefits of raytracing was that you can just… stop. You can use exactly as much of the frame as you want, for detail, and cut off right before the frame goes to the screen. Or: don’t. So what if the bottom of the frame is fractionally less noisy than the top?
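That’s progressive accumulation in a nutshell: keep tracing until the frame budget is spent, then stop (a minimal sketch, with one stand-in pixel instead of a scene):

```c++
#include <chrono>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    const double budget_ms = 16.0;   // ~60 fps frame budget

    double radiance = 0.0;           // running average for one fake pixel
    long samples = 0;
    const auto start = clock::now();

    for (;;) {
        const double elapsed_ms =
            std::chrono::duration<double, std::milli>(clock::now() - start).count();
        if (elapsed_ms >= budget_ms) break;  // budget spent: ship what we have

        const double sample = 0.5;   // stand-in for "trace one path, get radiance"
        radiance += (sample - radiance) / (double)(++samples);  // incremental mean

        // More samples = less noise; stopping early just leaves more noise.
        // A real renderer can also spend remaining rays on the noisiest
        // pixels first, so an early cutoff hurts least where it matters least.
    }
    std::printf("accumulated %ld samples in ~%.0f ms\n", samples, budget_ms);
}
```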
Meanwhile: fuck temporal effects. Light is not a fluid! It does not linger!
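The lingering is literal, for what it’s worth: temporal filters blend each new frame into an exponentially decaying history buffer, so stale light fades out over many frames instead of vanishing. A minimal sketch of the usual blend:

```c++
#include <cstdio>

int main() {
    // Typical temporal accumulation: the new frame contributes only
    // `alpha` of the final pixel, the history buffer supplies the rest.
    // Great for hiding noise; terrible when the scene actually changes.
    const float alpha = 0.1f;   // ballpark blend weight
    float history = 1.0f;       // pixel was fully lit last frame
    const float current = 0.0f; // the light just turned off

    for (int frame = 1; frame <= 30; ++frame) {
        history += alpha * (current - history);  // lerp toward the new frame
        if (frame % 10 == 0)
            std::printf("frame %2d: %.1f%% of the stale light remains\n",
                        frame, history * 100.0f);
    }
}
```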