• kata1yst · 3 days ago

    I build and train LLMs as part of my job, so I’m biased but informed.

    Large language models are literally text predictors. Their logic generates text probabilistically, calculated to give the correct result based on their previous training parameters and current inputs.

    IMHO there isn’t room for actual thought, reflection, or emotion in the relatively simple base logic of the model, only probabilistic emulation of those things. This amounts to reading about a character in a story going through something traumatic and feeling empathy. It’s a totally appropriate human response, but the character is fictional. The LLM wouldn’t feel anything in your shoes.
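
    For what it’s worth, here’s a toy sketch of that prediction loop (made-up vocabulary and a stand-in scoring function, not any real model’s API): score the candidates for the next token, turn the scores into probabilities, sample one, append it, and repeat.

    ```python
    import numpy as np

    # Toy vocabulary. In a real LLM the scores come from billions of trained
    # parameters, but the outer loop looks essentially like this.
    VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

    def score_next_token(tokens):
        # Hypothetical stand-in for the model's forward pass: one score
        # (logit) per vocabulary entry, given the text so far.
        rng = np.random.default_rng(len(tokens))
        return rng.normal(size=len(VOCAB))

    def generate(prompt_tokens, steps=5):
        tokens = list(prompt_tokens)
        for _ in range(steps):
            logits = score_next_token(tokens)
            probs = np.exp(logits) / np.exp(logits).sum()    # softmax -> probabilities
            tokens.append(np.random.choice(VOCAB, p=probs))  # sample the next word
        return " ".join(tokens)

    print(generate(["the", "cat"]))
    ```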

    • edric@lemm.ee · 3 days ago

      This is probably not an original thought, but I just realized that passing the Turing test doesn’t necessarily mean the computer has reached AI, just that it is good enough at manipulating a human’s emotions to make them think it has.

      • kata1yst · 3 days ago

        Absolutely. Academics debate the Turing test ad nauseam for this exact reason. It measures humans, not computers.

    • jrs100000@lemmy.world · 3 days ago

      This is not a very strong argument. By the same logic you could claim that biological thought, reflection, and emotion are impossible because it’s just clumps of fat squirting chemicals and electrical signals at each other. The fact of the matter is we don’t know what causes consciousness, so we can’t know whether it could form from sufficiently complex statistical interactions.

      • kata1yst · 3 days ago

        I can sort of see where you’re coming from, but I disagree.

        We know what the logic and data-processing layers inside an LLM look like. We know what they do. We know generally how they connect, though the interconnections are the domain of training and are generally hard to decipher once training is performed.

        But really, all an LLM does is parse input and predict the next cluster of words. It doesn’t even have internal memory to store the last query, let alone an ongoing experience (see the sketch below).

        I do believe AI capable of thinking and feeling, even beyond human levels, is inevitable. But it won’t be an encoder/decoder transformer LLM, which is basically what all the current architectures are.

        There are really cool and useful things we can do with LLMs in the meantime though, 99% of which won’t be chatbots.
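
        To make the statelessness point concrete, here’s a hypothetical sketch (not any particular API): the model call itself keeps nothing, so a chat front-end has to resend the whole transcript on every turn.

        ```python
        # Stand-in for a single LLM generation call. Nothing about this call
        # survives it: no log, no memory, no "experience" of having answered.
        def complete(prompt: str) -> str:
            return f"<reply generated from {len(prompt)} characters of prompt>"

        # The illusion of memory lives entirely on the caller's side.
        history = []

        def ask(user_message: str) -> str:
            history.append(f"User: {user_message}")
            reply = complete("\n".join(history))   # full transcript resent every time
            history.append(f"Assistant: {reply}")
            return reply

        ask("Hello")
        ask("What did I just say?")  # answerable only because *we* resent the history
        ```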

        • jrs100000@lemmy.world · 2 days ago

          We know lots of things about the mechanical functions of both human brains and LLMs, but that doesn’t really help, because we don’t know what causes consciousness in the first place. We don’t know if internal memory is required, or a sensory feedback loop, or specific brain structures, or something else entirely.

          • kata1yst · 2 days ago

            I concede I cannot prove a negative to you.

            To me (and many scientists much smarter than me), being conscious means constructing a chain of experience. A chain of experience requires some form of sensory perception combined with a memory to store those perceived sense experiences. So while it’s hard to prove something is conscious, it’s easy to evaluate whether something “likely is” or “probably isn’t” by considering its senses and memory capabilities, to the best of our understanding.

            Therefore a cloud, lacking any structure for sensory input and lacking any structure to store short- or long-term memory, can safely be classified as “unlikely to be conscious”.

            However, a simple mammal like a mouse would qualify as “likely conscious”.

            An LLM, however, cannot sense the difference between being on and idle versus being off. It can’t sense the computer it’s running on. Its only input is the text it’s fed. It does have access to a form of short-term memory within its neural network (for example: input A’s first token leads to layer B182 at column 1444567, and input A’s second token leads from that position to another in layer C23, etc.), but it entirely lacks a way to store the “experience” of input A and cannot “reflect” on that experience later. I think that puts it in the “unlikely conscious” category.
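
            A rough illustration of that last point (pseudocode-level, not a real inference engine): the working state only exists while a single response is being computed, and nothing from it is kept anywhere the model could later recall.

            ```python
            def run_inference(prompt: str) -> str:
                # Transient working state, loosely analogous to activations /
                # an attention cache. It exists only inside this one call.
                working_state = {"tokens_seen": [], "activations": []}
                output = []
                for token in prompt.split():
                    working_state["tokens_seen"].append(token)
                    working_state["activations"].append(hash(token) % 100)  # stand-in numbers
                    output.append(token.upper())                            # stand-in "prediction"
                # working_state is discarded here: no trace of this "experience"
                # remains for the model to reflect on later.
                return " ".join(output)

            run_inference("input A")
            run_inference("what was input A earlier?")  # no memory of the first call exists
            ```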

            I can see a path to intentionally building a neural network that is “likely conscious” with today’s technology, though I’d worry about the ethics and motivation.

            Now, that’s consciousness. Then there’s sentience, which I (and again, many people smarter than me) think requires consciousness, the ability to reflect on past conscious experience, and a sense of self, all used to construct a theory of what might happen in the near future in order to make intelligent decisions. Intelligent species like corvids, whales, elephants, apes, octopuses, etc. show significant signs of sentience by this definition. I think it’s safe to say we’re still a ways away from sentience in computers.

            Edited several times for clarity, sorry

            • I_Has_A_Hat@lemmy.world · 2 days ago

              Part of the problem is there is no hard line between what is conscious and what isn’t.

              Oh sure, there are things you can definitely call conscious or non-conscious. A dog is conscious, a rock is non-conscious. But what about the things that fall somewhere in the murky middle?

              Are jellyfish conscious? They have no brain, and seem to react only to the most basic of stimuli. On the other hand, they do exhibit behaviour like courtship and mating, show aversion to things that harm them, and have been shown to have at least a rudimentary form of learning where they will associate certain stimuli with specific outcomes to help with things like avoiding predators or obstacles.

              How about plants? Are they conscious? They certainly react to some stimuli. They communicate with nearby plants to warn them of dangers and to share nutrients in times of stress. More studies are needed, but evidence has been coming out that not only do plants respond to music, but different plants have different tastes.

              It’s one of the hard questions in philosophy, and one I’m not sure we’ll ever be able to fully answer.

              • kata1yst · 2 days ago

                I don’t disagree with the ambiguity here, but let’s focus on LLMs.

                I don’t understand the argument that an LLM is more conscious than traditional software. Take for example a spreadsheet with a timed macro.

                A spreadsheet with a timed macro stores state in both short-term memory (RAM) and long-term memory (the spreadsheet table itself), and accepts many inputs. The macro timer gives it repeated, continuing execution and a chance to process both past and present inputs with complexity.

                An LLM has limited short-term memory (its current place in the calculation, stored in VRAM) and zero long-term memory. It can only accept one input and cannot recall past experience to construct a chain of consciousness.
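
                To make the contrast concrete, here’s a toy version of that comparison (illustrative only; the file name and interval are made up): a timer-driven macro with both working memory and persistent storage, next to an LLM-shaped call that keeps nothing.

                ```python
                import json
                import time

                # The "spreadsheet with a timed macro": working memory (a dict)
                # plus long-term memory (a file), re-run on a timer so past
                # state shapes future behaviour.
                def timed_macro(runs=3, interval_seconds=1, store="sheet_state.json"):
                    try:
                        with open(store) as f:
                            state = json.load(f)      # long-term memory survives restarts
                    except FileNotFoundError:
                        state = {"runs_so_far": 0}
                    for _ in range(runs):
                        state["runs_so_far"] += 1     # short-term memory between ticks
                        with open(store, "w") as f:
                            json.dump(state, f)       # written back for next time
                        time.sleep(interval_seconds)
                    return state

                # The LLM-shaped counterpart: one input, one output, nothing retained.
                def stateless_predict(prompt: str) -> str:
                    return prompt[::-1]               # stand-in for "predict the next words"
                ```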

                The spreadsheet is objectively more likely to be conscious than the LLM. But no one argues for spreadsheet consciousness, because it’s ridiculous. People argue for LLM consciousness simply because we sentient, messy meat bags are evolutionarily programmed to construct theories of mind for everything, and in doing so we mistakenly personify things that remind us of ourselves. LLMs simply feel “alive-ish” enough to pass our evolutionary sniff test.

                But in reality, they’re less conscious than the spreadsheet.