Today’s language models are more sophisticated than ever, but they still struggle with the concept of negation. That’s unlikely to change anytime soon.
It must have some internal models of some things, or else it wouldn’t be possible to consistently make coherent and mostly reasonable statements. But the fact that it has a reasonable model of things like grammar and conversation doesn’t imply that it has a good model of literally anything else, unlike a human, for whom a basic set of cognitive skills is presumably transferable. Still, the success of LLMs at their actual language-modeling objective is a promising indication that it’s feasible for an ML model to learn complex abstractions.
if I copy a coherent sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements
Yes, but that’s not how LLMs work. My statement depends heavily on the fact that an LLM like GPT is coaxed into coherence by unsupervised or semi-supervised training. That the training process works is the evidence of an internal model (of language and related concepts), not just the fact that something outputs coherent statements.
if I have a bot pick a random book and copy the first sentence into my clipboard, my clipboard becomes capable of consistently making coherent statements. unsupervised training 👍
let me free up some of your time so you can go figure out how LLMs actually work
Talk about begging the question
it doesn’t. that’s why we’re calling it “spicy autocompletion”.
It does, which is why it’s autocompletion and not auto-gibberish.
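For concreteness, here is a toy sketch of the kind of unsupervised next-token training the thread is arguing about: no labels, just raw text, from which a model of “what tends to come next” is extracted. This is a character-level bigram counter in plain Python, not a transformer; the corpus string and every name in it are invented for illustration only.

```python
# Toy illustration (not how GPT is implemented): learn next-character statistics
# from raw, unlabeled text, then "autocomplete" by sampling from those statistics.
from collections import Counter, defaultdict
import random

# Made-up corpus, purely for illustration.
corpus = (
    "the model must have some internal model of language. "
    "the model predicts the next token from the previous text."
)

# "Training" is unsupervised: count which character tends to follow which
# (a maximum-likelihood bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def complete(prefix: str, length: int = 40, seed: int = 0) -> str:
    """Autocomplete by repeatedly sampling the next character from the learned counts."""
    rng = random.Random(seed)
    out = list(prefix)
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no continuation was ever observed for this character
        chars, weights = zip(*followers.items())
        out.append(rng.choices(chars, weights=weights, k=1)[0])
    return "".join(out)

print(complete("the m"))
```

Even this crude model “autocompletes” text that statistically resembles its training data rather than copying a stored sentence verbatim, which is the (very limited) sense of autocompletion the last two replies are trading on; an actual LLM replaces the bigram counts with a neural network trained on the same kind of next-token objective.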