A prevailing sentiment online is that GPT-4 still does not understand what it talks about. We can argue semantics over what “understanding” truly means. I think it’s useful, at least today, to draw the line at whether GPT-4 has successfully modeled parts of the world. Is it just picking words and connecting them with correct grammar? Or does the token selection actually reflect parts of the physical world?
One of the most remarkable things I’ve heard about GPT-4 comes from an episode of This American Life titled “Greetings, People of Earth”.
Maybe. But you don’t know that for sure, since we are currently far from actually understanding how our brains work. And the number of external parameters and stimuli required to “run a human from the same state multiple times” dwarfs the input we’re currently feeding LLMs, and would be pretty much impossible to provide without technology so advanced it might as well be magic.