• @mindbleach
    332 months ago

    Chasing photorealism has been unsustainable since before MW2 came out. You could see where that line was headed. The answer has always been procedural artwork - not randomized, just rule-based. Even if an entire desert gets away with four textures for sand, those shouldn’t be hand-drawn and manually-approved bitmaps. They should not be fixed-resolution. Let the machine generate them at whatever level of detail you need. Define what it’s supposed to look like.
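    A sketch of what "rule-based, not randomized" means in practice: a sand texel defined by a deterministic noise rule, rendered at whatever resolution the caller asks for. Everything here is illustrative, not from any particular engine.

```python
import math

def hash01(x, y, seed=0):
    # Deterministic pseudo-random value in [0, 1) from integer lattice coords.
    n = (x * 374761393 + y * 668265263 + seed * 144665) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def value_noise(u, v, seed=0):
    # Smooth noise: bilinear blend of lattice hashes with a smoothstep fade.
    x0, y0 = math.floor(u), math.floor(v)
    fx, fy = u - x0, v - y0
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    a, b = hash01(x0, y0, seed), hash01(x0 + 1, y0, seed)
    c, d = hash01(x0, y0 + 1, seed), hash01(x0 + 1, y0 + 1, seed)
    return (a * (1 - sx) + b * sx) * (1 - sy) + (c * (1 - sx) + d * sx) * sy

def sand_texel(u, v, seed=0):
    # The "sand" rule: a few octaves of noise tinted toward tan.
    # The rule is the asset; resolution is whatever the caller wants.
    octaves = 4
    g = sum(value_noise(u * 2**o, v * 2**o, seed + o) / 2**o for o in range(octaves))
    g /= sum(1 / 2**o for o in range(octaves))
    return (int(210 * g + 40), int(190 * g + 35), int(140 * g + 25))

def render(width, height, seed=0):
    # Same definition at 64x64 or 8192x8192: pick your level of detail.
    return [[sand_texel(x / width * 8, y / height * 8, seed)
             for x in range(width)] for y in range(height)]
```

    The same seed always reproduces the same texture, so nothing here needs to ship as a bitmap.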

    This is how that “Doom 3 on a floppy disk” game, .kkrieger, worked. It weighs 96 KB. It doesn’t look like Descent. It has oodles of textures and smooth models. Blowing a few megabytes on that kind of content is a lot easier than cramming things down, and a lot cheaper than mastering five hundred compressed six-channel bitmaps. Even if every rivet on a metal panel was drawn by hand with a circle tool, ship that tool, so that no matter how closely the player looks, those rivets stay circular.
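    "Ship the circle tool" can be as small as a signed-distance stamp: the rivet is stored as a center and radius, and coverage is computed per pixel, so it stays a crisp circle at any zoom. A toy sketch with made-up names:

```python
import math

def rivet_coverage(px, py, cx, cy, radius, px_size):
    # Signed distance from the pixel center to the circle's edge,
    # turned into coverage with a one-pixel antialiasing ramp.
    d = math.hypot(px - cx, py - cy) - radius
    return min(1.0, max(0.0, 0.5 - d / px_size))

def stamp_rivets(width, height, rivets):
    # rivets: list of (cx, cy, radius) in the same units as pixel coords.
    # Re-run at any width/height and the circles re-resolve, no texels.
    img = [[0.0] * width for _ in range(height)]
    px_size = 1.0  # one pixel in image units
    for y in range(height):
        for x in range(width):
            for (cx, cy, r) in rivets:
                cov = rivet_coverage(x + 0.5, y + 0.5, cx, cy, r, px_size)
                img[y][x] = max(img[y][x], cov)
    return img
```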

    You can draw rust and have it be less shiny because that’s how rust is defined - and have that same smear of rust look a little bit different every time it appears, tiled across a whole battleship. Every bullet ding and cement crack can become utterly unremarkable by being completely unique and razor-sharp at macro-lens distances. You don’t hire a thousand artists to manage one tree each, you hire a handful of maniacs who can define: wood. Sapling, tree, log, plank, chair, wood. Hand that to a dozen artists and watch them crank out a whole bespoke forest in an afternoon.
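    A toy version of that rust rule, assuming nothing about any real engine: rust-ness drives roughness (less shiny by definition), and a per-instance seed makes every smear a little different while obeying the same definition.

```python
import math

def rust_amount(u, v, seed):
    # Cheap deterministic noise standing in for a real noise function:
    # fract(sin(...)) maps any (u, v, seed) to a value in [0, 1).
    n = math.sin(u * 12.9898 + v * 78.233 + seed * 37.719) * 43758.5453
    return n - math.floor(n)

def rust_material(u, v, seed):
    # The rule: more rust means rougher (duller) and more orange.
    rust = rust_amount(u, v, seed)
    base_roughness = 0.2   # polished metal
    rust_roughness = 0.9   # rust is rough, hence less shiny
    roughness = base_roughness + (rust_roughness - base_roughness) * rust
    color = (0.55 + 0.25 * rust,
             0.55 - 0.25 * rust,
             0.55 - 0.35 * rust)
    return {"roughness": roughness, "color": color}
```

    Tile this across a battleship with a different seed per instance and every smear is unique, yet all of them are "rust" by the same rule.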

    • @[email protected]
      32 months ago

      How do you think modern games are made? Procedural generation is used all over the place to create materials and entire landscapes.

      • @mindbleach
        62 months ago

        But never ships clientside.

        These tools have been grudgingly adopted, but only to make ‘let’s hire ten thousand artists for a decade!’ accomplish some ridiculous goal, as measured in archaic compressed textures and static models. The closest we came was “tessellation” as a buzzword for cranking polycount in post. And it somehow fucked up both visuals and performance. Nowadays Unreal 5 brags about its ability to render zillion-polygon Mudbox meshes at sensible framerates, rather than letting artists do pseudo-NURBS shit on models that don’t have a polycount. And no bespoke game seems ready to scale to 32K, or zoom in on a square inch of carpet without seeing texels, even though we’ve had this tech for umpteen years and a texture atlas is not novel.

        Budgets keep going up and dev cycles keep getting longer and it’s never because making A Game is getting any harder.

    • @[email protected]
      22 months ago

      You propose an interesting approach. I just wonder how the individual streaks of different rust interact with typical graphics pipelines. You can certainly ship a generator, but to rasterize the image the texture still has to be generated and uploaded to GPU memory for the shaders to sample. Won’t you blow through VRAM limits or shader-cache limits by having no texture reuse anywhere?

      • @mindbleach
        22 months ago

        Any game with texture pop-in is already handling more data than you have space. “Rage” famously had unique textures across the entire world… and infamously streamed them from DVD, with the dumbest logic for loading and unloading. You could wait for everything to load, turn around, and it would all be blurry again.
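        The answer to the VRAM worry can be sketched as a residency cache: tiles are produced on demand (by a generator instead of a disc read), kept within a fixed budget, and evicted least-recently-used. That is the same scheme virtual texturing already uses; names here are hypothetical.

```python
from collections import OrderedDict

class TileCache:
    # Fixed-size pool of generated tiles, standing in for a VRAM budget.
    def __init__(self, generate, budget):
        self.generate = generate       # rule-based tile generator
        self.budget = budget
        self.resident = OrderedDict()  # (x, y, mip) -> tile payload
        self.misses = 0

    def fetch(self, x, y, mip):
        key = (x, y, mip)
        if key in self.resident:
            self.resident.move_to_end(key)  # mark most recently used
            return self.resident[key]
        self.misses += 1
        tile = self.generate(x, y, mip)     # synthesize instead of stream
        self.resident[key] = tile
        if len(self.resident) > self.budget:
            self.resident.popitem(last=False)  # evict least recently used
        return tile
```

        Texture reuse still happens, just at the tile level and only for what the current view actually touches.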

        Anyway if you’re rendering ten zillion copies of something way out in the distance, those can all be the same. It will not matter whether they’re high-res or unique when they’re eight pixels across. As Nvidia said: if you’re not cheating, you’re just not trying.
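        The "eight pixels across" point is just mip selection: a renderer picks the mip level from on-screen footprint, so a distant object samples a tiny image no matter how unique or high-res its full texture is. A minimal sketch of that selection, one texel per screen pixel:

```python
import math

def mip_for_footprint(texture_size, screen_pixels):
    # Each mip halves resolution; pick the level whose texel density
    # roughly matches the object's on-screen size.
    ratio = max(1.0, texture_size / max(1.0, screen_pixels))
    max_mip = int(math.log2(texture_size))
    return min(max_mip, int(math.log2(ratio)))

def mip_resolution(texture_size, screen_pixels):
    # Resolution actually sampled for an object this big on screen.
    return texture_size >> mip_for_footprint(texture_size, screen_pixels)
```

    A 4096-texel-wide unique texture on an object eight pixels across resolves to an eight-texel mip, which is why distant copies are effectively free to share.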