I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that especially the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

  • @Ziggurat
    42 months ago

    have you played that game where everyone writes a subject and puts it on a stack of paper, then everyone writes a verb for a different stack, then everyone writes an object for a third stack, and you can even add a place or whatever on the next stack? You end up with fun sentences like “A cat eats Kevin’s brain on the beach.” It’s the kind of stuff (pre-)teens do to have a good laugh.

    ChatGPT somehow works the same way, except that instead of having 10 papers in 5 stacks, it has millions of papers in thousands of stacks, and depending on the “context” it will choose which stack it draws a paper from (to take an ELI5 analogy).
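
    If you want to make the analogy concrete, here is a tiny toy script in that spirit (the word stacks and probabilities below are made up for illustration, not anything taken from the real model): the current “context” decides which stack of paper you draw from, and some papers in a stack are more likely to come up than others.

    ```python
    import random

    # Toy version of the "stacks of paper" game: each stack holds slips of
    # paper with a candidate next phrase and a weight saying how likely it
    # is to be drawn. The current context picks which stack we draw from.
    stacks = {
        "<start>":       [("A cat", 0.5), ("Kevin", 0.5)],
        "A cat":         [("eats", 0.7), ("sleeps on", 0.3)],
        "Kevin":         [("eats", 0.6), ("sleeps on", 0.4)],
        "eats":          [("Kevin's brain", 0.5), ("a sandwich", 0.5)],
        "sleeps on":     [("the beach.", 1.0)],
        "Kevin's brain": [("on the beach.", 1.0)],
        "a sandwich":    [("on the beach.", 1.0)],
    }

    def draw(context):
        """Draw one slip of paper from the stack selected by the context."""
        slips = stacks[context]
        phrases = [phrase for phrase, _ in slips]
        weights = [weight for _, weight in slips]
        return random.choices(phrases, weights=weights)[0]

    def babble():
        """Chain draws together: each drawn phrase becomes the next context."""
        context = "<start>"
        sentence = []
        while context in stacks:
            context = draw(context)
            sentence.append(context)
        return " ".join(sentence)

    print(babble())  # e.g. "A cat eats Kevin's brain on the beach."
    ```

    The real thing replaces this handful of stacks with a model that scores every possible next word given everything it has seen so far, but the spirit is the same: it draws likely-sounding continuations, it never checks whether they are true.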

    • HucklebeeOP
      12 months ago

      I think what makes it hard to wrap your head around is that sometimes this text is emotionally charged. What I notice is that it’s especially hard if an AI “goes rogue” and starts saying sinister and malicious things. Our brain immediately jumps to “it has bad intent” when in reality it’s just drawing on some Reddit posts where it happened to connect some troll messages or extremist texts.

      How can we decouple emotionally when it feels so real to us?