• BitSound@lemmy.world · 1 year ago

    Define intelligence. Your last line is kind of absurd. Why can’t intelligence be described by an algorithm?

    • Veraticus@lib.lgbt · 1 year ago

      LLMs do not think or feel or have internal states. With the same random seed and the same input, GPT4 will generate exactly the same output every time. Its speech is the result of a calculation, not of intelligence or self-direction. So, even if intelligence can be described by an algorithm, LLMs are not that algorithm.
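      (To make the determinism point concrete, here's a minimal Python sketch with a toy next-token sampler; `PROBS` and `generate` are made-up stand-ins for illustration, not anything from GPT4 itself.)

      ```python
      import random

      # Toy next-token "model": a fixed probability table standing in for an LLM.
      PROBS = {"the": 0.5, "a": 0.3, "cat": 0.2}

      def generate(prompt: str, seed: int, n_tokens: int = 5) -> str:
          rng = random.Random(seed)  # fixed seed -> identical sequence of draws
          tokens = prompt.split()
          for _ in range(n_tokens):
              # Sample the next token from the fixed distribution.
              tokens.append(rng.choices(list(PROBS), weights=list(PROBS.values()))[0])
          return " ".join(tokens)

      # Same seed + same input -> byte-identical output, every run.
      assert generate("once upon", seed=42) == generate("once upon", seed=42)
      ```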

      • BitSound@lemmy.world · 1 year ago

        What exactly do you think would happen if you could make an exact duplicate of a human and run it from the same state multiple times? They would generate exactly the same output every time. How could you possibly think differently without turning to human exceptionalism and believing in magic meat?

        • loutr · 1 year ago

          > They would generate exactly the same output every time.

          Maybe, but you don’t know that for sure, since we’re still far from actually understanding how our brains work. And the number of external parameters and stimuli required to “run a human from the same state multiple times” dwarfs the input we currently feed LLMs; it would be pretty much impossible without technology so advanced it might as well be magic.

      • SirGolan@lemmy.sdf.org · 1 year ago

        For the record, GPT4 specifically is non-deterministic. The leading theory is that this is because it uses a mixture-of-experts (MoE) architecture, but that’s just a theory; maybe OpenAI knows why. Also, it’s not a random seed, it’s temperature. As temperature approaches 0, the most probable next token’s probability approaches 1 and every other token’s approaches 0, so the model should always select that token (greedy decoding). GPT3 and most others are effectively deterministic at that setting, but not GPT4.
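        (A quick sketch of what temperature does to a softmax over next-token logits; the logit values below are made up purely for illustration.)

        ```python
        import math

        def softmax_with_temperature(logits, temperature):
            scaled = [x / temperature for x in logits]
            m = max(scaled)  # subtract the max for numerical stability
            exps = [math.exp(s - m) for s in scaled]
            total = sum(exps)
            return [e / total for e in exps]

        logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
        for t in (1.0, 0.5, 0.01):
            print(t, [round(p, 3) for p in softmax_with_temperature(logits, t)])
        # As temperature -> 0 the distribution collapses onto the highest-logit
        # token, which is why temperature 0 is implemented as greedy decoding.
        ```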