Archived link: https://archive.ph/Vjl1M
Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.
This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”
It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.
It's worth underlining how wild it is that this approach ever works. Neural networks are the miraculous way of the future; LLMs are a silly transitional step. LLMs are trained for plausibility, not correctness. To my knowledge, nobody is training on "exquisite corpse" counterexamples, or outright Time Cube bullshit, to teach the model when to pull the chute.
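The plausibility-vs-correctness gap is easy to demonstrate in miniature. Here's a toy sketch (emphatically not how Google's models work; the corpus and phrases are invented for illustration): a bigram "language model" trained on a few real proverbs will happily assign a nonzero plausibility score to a phrase it has never seen, because plausibility scoring has no notion of "attested" at all.

```python
from collections import Counter

# Tiny training corpus of real proverbs.
corpus = [
    "a watched pot never boils",
    "a rolling stone gathers no moss",
    "a stitch in time saves nine",
    "let sleeping dogs lie",
]

# Count unigrams and bigrams, with a start-of-sentence marker.
bigrams = Counter()
unigrams = Counter()
for sent in corpus:
    words = ["<s>"] + sent.split()
    unigrams.update(words)
    bigrams.update(zip(words, words[1:]))

vocab_size = len(unigrams)

def plausibility(phrase: str) -> float:
    """Add-one-smoothed bigram probability of the phrase."""
    words = ["<s>"] + phrase.split()
    p = 1.0
    for a, b in zip(words, words[1:]):
        p *= (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size + 1)
    return p

made_up = "a sleeping stone saves no pot"
print(made_up in corpus)           # False: never seen in training
print(plausibility(made_up) > 0)   # True: still scored as "plausible"
```

The smoothing that lets the model handle unseen word pairs is the same machinery that makes a never-uttered "proverb" look like language. Nothing in the score distinguishes a common saying from random words in a familiar shape.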
Half the problem with AI is that it’s accidentally okay at what fools and grifters insist it’s flawless at, and the other half is that the grifters have pushed it onto as many fools as possible.
The near future requires new questions. 'What word comes next?' is a fascinating proof of concept. But we'd be better off with a model that grinds through a long prompt before answering the yes-no-maybe question: 'Is this bullshit?'