• Kogasa@programming.dev · 6 months ago

    It must have some internal models of some things, or else it wouldn’t be possible to consistently make coherent and mostly reasonable statements. But the fact that it has a reasonable model of things like grammar and conversation doesn’t imply that it has a good model of literally anything else, which is unlike a human, for whom a basic set of cognitive skills is presumably transferable. Still, the success of LLMs in their actual language-modeling objective is a promising indication that it’s feasible for an ML model to learn complex abstractions.

    • sc_griffith@awful.systems · 6 months ago

      if I copy a coherent sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements

      • Kogasa@programming.dev · edited · 6 months ago

        Yes, but that’s not how LLMs work. My statement depends heavily on the fact that an LLM like GPT is coaxed into coherence by unsupervised or semi-supervised training. That the training process works is the evidence of an internal model (of language/related concepts), not just the fact that something outputs coherent statements.
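The self-supervised objective being argued about can be sketched with a toy model. This character-level bigram counter is purely illustrative (it is nothing like GPT's architecture or scale), but the training signal is the same kind: predict the next token from raw, unlabeled text, so whatever structure the model captures comes only from the statistics of the data.

```python
# Toy illustration of self-supervised "next-token" training:
# no labels are given; the text itself supplies the prediction targets.
from collections import Counter, defaultdict
import random

def train_bigram(text):
    """Count next-character frequencies: a (tiny) internal model of the text."""
    model = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        model[cur][nxt] += 1
    return model

def generate(model, start, length, seed=0):
    """Sample characters according to the learned transition counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        chars, weights = zip(*choices.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat and the dog sat on the log "
model = train_bigram(corpus)
print(generate(model, "t", 40))
```

Unlike the clipboard analogy above, the output here is not a stored copy of any input sentence; it is sampled from statistics the training process extracted, which is the (weak, bigram-level) sense in which the model "learned" something about the text.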

        • sc_griffith@awful.systems · edited · 6 months ago

          if I have a bot pick a random book and copy the first sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements. unsupervised training 👍

        • self@awful.systems · 6 months ago

          let me free up some of your time so you can go figure out how LLMs actually work

    • slopjockey@awful.systems · 6 months ago

      It must have some internal models of some things, or else it wouldn’t be possible to consistently make coherent and mostly reasonable statements.

      Talk about begging the question