• andrew_bidlaw
    2 months ago

    I mean… We had Daggerfall and Minecraft with procedural generation under the hood, and many others, for a very long time. Why would we need a model that ‘learns’?

    I’m asking about in-game applications, not the use of LLMs in production.

    • Fubarberry@sopuli.xyz
      2 months ago

      Obvious application is having NPCs that you can actually talk with. Not just about one or two topics that they have a pre-recorded voice line to tell you about, but about anything at all. And with AI speech generation as well, you could have them somewhat realistically talk back to you.

      You could also have an LLM working as a kind of DM, coming up with new quests with stories and some content variety. A lot of games have repeatable randomized missions, but these are very formulaic and feel very repetitive after you’ve done a few. There’s usually no story, just a basic combat grind. An LLM could come up with actually interesting randomized quests, like a murder mystery where the murderer had a motive and you can legitimately question the suspects about anything they know.

      • andrew_bidlaw
        2 months ago

        I read that sentiment about quests a lot, and I have some sympathy for it myself, but I find it questionable.

        Being formulaic is what makes a quest work with the system. At the level of raw code, it needs a list of triggers for events and responses, all wired to the systems and the world that already exist. It needs to place an NPC with a question mark who has a fetch quest, update your map/journal when you accept it, and respond correctly with a reward when the conditions are met. That’s just the basic level.
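        To make that concrete, a fetch quest is really just a small state machine that the engine owns. A minimal sketch (all names here are made up for illustration):

```python
from dataclasses import dataclass

# A fetch quest as plain engine-owned data: strict triggers and state
# transitions, nothing free-form. Every field name is hypothetical.
@dataclass
class FetchQuest:
    giver_npc: str        # NPC who gets the question mark
    target_item: str      # the thing to fetch
    reward_gold: int
    accepted: bool = False
    completed: bool = False

    def accept(self, journal: list) -> None:
        # Trigger: player talks to the giver; journal/map get updated.
        self.accepted = True
        journal.append(f"Bring {self.target_item} to {self.giver_npc}")

    def turn_in(self, inventory: set) -> int:
        # The reward fires only when the strict condition is met.
        if self.accepted and self.target_item in inventory:
            self.completed = True
            return self.reward_gold
        return 0
```

        Every transition here is something the engine has to recognize and execute; there is no slack for a model to improvise in.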

        For an LLM to perform such complex and strict manipulations, it would have to be narrowly guided into generating a working fetch quest without a single mistake. That kills off most of what it’s actually good at. And we’d have to build the pipelines for it to lay out more complex quests ourselves. At that point it’s no easier than building a sophisticated procedural quest-generation engine, with the same result.
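        A sketch of what that narrow guiding would look like in practice: the model is only allowed to fill slots the engine defines, and anything outside the schema gets thrown away (field names are hypothetical):

```python
import json

# The engine's contract: the model may only fill these slots, with
# these types. Everything else is rejected. Names are illustrative.
QUEST_SCHEMA = {
    "giver_npc": str,
    "target_item": str,
    "reward_gold": int,
}

def validate_quest(llm_output: str):
    """Parse model output; reject anything the engine can't execute."""
    try:
        data = json.loads(llm_output)
    except json.JSONDecodeError:
        return None  # free-form prose, not a quest
    if set(data) != set(QUEST_SCHEMA):
        return None  # missing or extra fields
    for key, expected_type in QUEST_SCHEMA.items():
        if not isinstance(data[key], expected_type):
            return None  # wrong type in a slot
    return data
```

        Notice how little of the model’s open-ended ability survives this: it is reduced to picking three values, which a random table does just as well.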

        Furthermore, it’s a pain in the ass to create enough training material to teach it even the world’s lore, so that it doesn’t confidently make up what it doesn’t know and can actually hold a conversation - to reach the level of ChatGPT’s responses, they fed it petabytes of data. A model trained on LotR alone, a few megabytes of text, wouldn’t even be able to greet you back, and making an existing model speak in character about a world it has never seen is, well, complicated. In your free time, you can try to make Bard speak like a dunmer fisherman from Huul who knows what’s going on around him at every level of the worldbuilding young Bethesda put in. To make it accurate, you’d end up pasting a whole book into its prompt window, and it would still spit out nonsense.

        Instead, I see LLMs being injected in the places they’re actually good at, and voicing NPCs’ lines, which you mentioned, is one thing they can excel at. Quick drafts of texts and quests that you then put through development? Okay. But making them communicate with existing systems is fitting a triangular peg into a square hole, imho.

        For procedural generation at its finest, you can read the saga of Boatmurdered in Dwarf Fortress: https://lparchive.org/Dwarf-Fortress-Boatmurdered/Introduction/

        • Fubarberry@sopuli.xyz
          2 months ago

          I don’t have time right now to write a full proper response, but for quests, I imagine that starting out we’d still use traditional random generation for the bones of the quest, and use an LLM to create the narrative and NPC dialogue for it. Games like Shadows of Doubt already do a good job with randomly generated objectives, but there’s no motive for the crimes. Just taking the existing gameplay and using an LLM to generate a reason why the crime happened would help the atmosphere a lot. Also, you can question suspects and sometimes solve the case by them telling you they saw [person] at [location] at [time], but I think an LLM could provide actual witness interrogation, where you have to ask the right questions or try to catch them in a lie.
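          A rough sketch of that split, with `llm_complete` as a hypothetical stand-in for whatever completion API a game would actually call: the engine rolls the solvable facts, and the model only writes flavor on top of them:

```python
import random

def llm_complete(prompt: str) -> str:
    # Hypothetical stub for illustration; a real game would call
    # some model API here. The engine never trusts this text for logic.
    return "(narrative text from the model)"

def generate_case(rng: random.Random) -> dict:
    suspects = ["clerk", "dockworker", "landlord"]
    # The "bones": traditionally random-generated, engine-controlled facts
    # that make the case solvable regardless of what the model says.
    crime = {
        "victim": rng.choice(["merchant", "courier"]),
        "culprit": rng.choice(suspects),
        "location": rng.choice(["warehouse", "apartment"]),
    }
    # Only the narrative layer (the motive) is delegated to the model.
    crime["motive"] = llm_complete(
        f"Write a one-sentence motive for why the {crime['culprit']} "
        f"targeted the {crime['victim']} at the {crime['location']}."
    )
    return crime
```

          The point of the split is that a hallucinated motive can’t break the case, because none of the clues the player needs depend on it.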

          As far as the mechanics for LLMs to actually provide dialogue, I expect some third-party AI startups to work on it: some kind of system with base language packages that provide general knowledge and dialogue ability, plus a collection of smaller models/LoRAs to specialize them. Finally, you’d have behind-the-scenes prompting that tells the NPC who their character is, any character- or quest-specific knowledge they have, their disposition toward the player, etc. I don’t expect every game company to come up with this on their own; I suspect we’ll get a few individual companies offering a ready-built solution starting out, before it eventually becomes built into the larger game engines.
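          That behind-the-scenes prompting could be as simple as assembling a system prompt from a persona, a knowledge list, and a disposition. A minimal sketch, with every name and field hypothetical:

```python
def build_npc_prompt(name: str, role: str, knowledge: list, disposition: str) -> str:
    """Compose a system prompt constraining an NPC to its character
    and to quest-specific knowledge. Purely illustrative."""
    facts = "\n".join(f"- {fact}" for fact in knowledge)
    return (
        f"You are {name}, a {role}. Stay in character at all times.\n"
        f"You know only the following:\n{facts}\n"
        f"Your disposition toward the player is {disposition}.\n"
        "If asked about anything outside this knowledge, deflect in character."
    )

prompt = build_npc_prompt(
    name="Ralen",
    role="dunmer fisherman",
    knowledge=[
        "The lighthouse keeper vanished last week.",
        "Smugglers use the cove at night.",
    ],
    disposition="wary",
)
```

          How well the model actually obeys a prompt like this is exactly the open question from the comment above, but the assembly itself is cheap, which is why a middleware vendor could plausibly own it.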

    • AnIntenseMoist@lemmy.world
      2 months ago

      I forgot what game it was for, but some guy implemented an actual conversation system with in-game outcomes using AI.

      I could also see more dynamic questing systems, character behaviors, even crafting systems built around the tech. But that requires investment and effort to make the tech work, which is not exactly why studios are investing in AI in the first place.

    • Squid1501@lemmy.world
      2 months ago

      There are a small handful of good uses. Content moderation and automatic translation of voice chat are examples.

      Mostly, though, I think it will be AI used to generate content for the game, not during the game.