• @[email protected]
    4 months ago

    Not being combative or even disagreeing with you - purely out of curiosity, what do you think are the necessary and sufficient conditions of intelligence?

    • @[email protected]
      4 months ago

      A worldview simulation it can use as a scratch pad for reasoning. I view reasoning as a set of simulated actions to convert a worldview from state A to state B.

      It depends on how you define intelligence, though. People normally define it as human-like, and I think there are three primary subtypes of intelligence needed for cognizance: reasoning, awareness, and knowledge. I think the current generation of models is figuring out the knowledge type, but it needs to be combined with the other two to be complete.

      • @[email protected]
        4 months ago

        Thanks! I’m not clear on what you mean by a worldview simulation as a scratch pad for reasoning. What would be an example of that process at work?

        For sure, defining intelligence is non-trivial. What clears the bar of intelligence, and what doesn’t, is not obvious to me. That’s why I’m engaging here: it sounds like you’ve put a lot of thought into an answer, but I’m not sure I understand your terms.

        • @[email protected]
          4 months ago

          A worldview is your current representational model of the world around you. For example, you know you’re a human on Earth in a physical universe with a set of rules; you have a mental representation of your body and its capabilities, your location, and the physicality of the things in your location. It can also include abstract things, like your personality, your relationships, and your understanding of what’s possible in the world.

          Basically, you live in reality, but you need a way to store a representation of that reality in your mind in order to be able to interact with and understand that reality.

          The simulation part is your ability to imagine manipulating that reality to achieve a goal. If you break that down, you’re trying to convert reality from your perceived current real state A to an imagined desired state B. Reasoning is coming up with a plan to convert the worldview from state A to state B, step by step.

          Say you want to brush your teeth: you want to convert your worldview of you having dirty teeth to you having clean teeth. To do that, you reason that you need to follow a few steps, like moving your body to the bathroom, retrieving tools (toothbrush and toothpaste), and applying mechanical action to your teeth to clean them. You created a step-by-step plan to change the state of your worldview to a new desired state you came up with.

          The goal doesn’t need to be physical, either; it could be abstract, like calculating a tip for a bill. It can also be grand, like going to college or creating a mathematical proof.
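          That state-A-to-state-B framing maps directly onto classical planning. Here’s a minimal sketch in Python using breadth-first search over states (the state names and actions are invented for illustration, not anything from the comment above):

```python
from collections import deque

# Each action maps a precondition state to a successor state.
# States and actions here are made up for the tooth-brushing example.
ACTIONS = {
    "go to bathroom": ("dirty teeth, in bedroom", "dirty teeth, in bathroom"),
    "pick up brush":  ("dirty teeth, in bathroom", "dirty teeth, holding brush"),
    "brush teeth":    ("dirty teeth, holding brush", "clean teeth"),
}

def plan(start, goal):
    """Breadth-first search from state A to state B, returning the action list."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if state == goal:
            return steps
        for action, (pre, post) in ACTIONS.items():
            if pre == state and post not in seen:
                seen.add(post)
                queue.append((post, steps + [action]))
    return None  # no sequence of known actions reaches the goal

print(plan("dirty teeth, in bedroom", "clean teeth"))
# → ['go to bathroom', 'pick up brush', 'brush teeth']
```

          The interesting part is that the "worldview" (the state strings) and the "simulation" (expanding successor states without acting in the real world) are separate from the plan itself, which is exactly the scratch-pad idea.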

          LLMs don’t have a representational model of the world. They don’t have a working memory or a world simulation to use as a scratchpad for testing out reasoning. They just take a sequence of words and predict the next word that is probabilistically and relationally likely to be a good continuation, based on their training data.
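          The "predict the next likely word" behaviour can be caricatured with a toy bigram model. This is a drastic simplification of a transformer (real models learn vector representations over billions of tokens, not raw counts), but it shows the purely statistical nature of the prediction:

```python
from collections import Counter, defaultdict

# Toy "training data" for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # → 'cat' (seen twice, vs 'mat' and 'fish' once each)
```

          Note there is no state A or state B anywhere in this: the model never represents what the sentence is about, only which word tends to follow which.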

          They could be a really important cortex that assists in developing a worldview model, but in their current state as single-task AI models, they cannot do reasoning on their own.

          Knowledge retrieval is an important component that assists in reasoning, though, so LLMs can still play a very important role in it.

          • @[email protected]
            4 months ago

            Interesting. I’m curious to know more about what you think of training datasets. Seems like they could be described as a stored representation of reality that maybe checks the boxes you laid out. It’s a very different structure of representation than what we have as animals, but I’m not sure it can be brushed off as trivial. The way an AI interacts with a training dataset is mechanistic, but as you describe, human worldviews can be described in mechanistic terms as well (I do X because I believe Y).

            You haven’t said it, so I might be wrong, but are you pointing to free will and imagination as somehow tied to intelligence in some necessary way?