Mazdak@lemmy.org to Technology@lemmy.world · English · 1 year ago
The first minds to be controlled by generative AI will live inside video games (www.cnbc.com)
78 comments · cross-posted to: [email protected]
bionicjoey@lemmy.ca · English · 1 year ago
I can prove to you ChatGPT doesn’t have a mind. Just open up the Sunday Times Cryptic Crossword and ask ChatGPT to solve and explain the clues.
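For anyone who wants to run this test themselves, here is a minimal sketch using the OpenAI Python client. The model name is illustrative, and the clue shown is a well-known classic example rather than one from the Sunday Times:

```python
# Minimal sketch: ask an LLM to solve and explain a cryptic crossword clue.
# Assumes the OpenAI Python client with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# A classic example clue (answer: SCRAMBLED EGGS), used here for illustration.
clue = "Gegs (9,4)"

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever model you want to test
    messages=[
        {
            "role": "system",
            "content": "You solve cryptic crossword clues and explain the wordplay.",
        },
        {
            "role": "user",
            "content": f"Solve this cryptic crossword clue and explain it: {clue}",
        },
    ],
)

print(response.choices[0].message.content)
```

Whether the model's explanation of the wordplay holds up, rather than just the final answer, is the interesting part of the test.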
OrderedChaos@lemmy.world · English · 1 year ago
I’m confused by this idea. Maybe I’m just seeing it from the wrong point of view. If you asked me to do the same thing I would fail miserably.
KairuByte@lemmy.dbzer0.com · English · 1 year ago
Not the original intent, but you’d likely throw your hands up immediately and say you don’t know; an LLM would hallucinate an answer instead.
bionicjoey@lemmy.ca · English · 1 year ago
But some humans can, since the clues require a simultaneous understanding of what words mean and how they are spelled.
General_Effort@lemmy.world · English · 1 year ago
What should we conclude about most humans who cannot solve these crosswords? It should be relatively easy to train an LLM to solve these puzzles. I am not sure what that would show.
General_Effort@lemmy.world · English · 1 year ago
Can you please explain the reasoning behind the test?