I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around. Because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
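One concrete demo that has worked for me is a toy Markov chain. This is nothing like a real transformer in scale or mechanism, and the corpus and names below are made up for illustration, but the core move is the same: pick a statistically plausible next word. The output is locally fluent while obviously understanding nothing.

```python
import random
from collections import defaultdict

# Toy bigram Markov chain: learn which word tends to follow which,
# then generate text by sampling those counts. It has no grasp of
# meaning; it only reproduces surface patterns.

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# For each word, collect every word observed to follow it
# (duplicates kept, so sampling respects observed frequency).
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word = start
    output = [word]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:  # no observed continuation: stop
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
# Possible output: "the dog sat on the mat . the cat chased the dog ."
# Locally fluent, globally meaningless: no intent, no understanding.
```

An LLM does this same trick at enormously greater scale, which is why its sentences are so much more convincing, but the generator still has no intent behind the words.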

    • kaffiene@lemmy.world · 7 months ago

      In the sense that the “argument” is an intuition pump. As an anti-AI argument it’s weak - you could replace the operator in the Chinese room with an operator inside an individual neuron and conclude that our brains don’t know anything, either.