• @mindbleach
    15 months ago

    People expect dumb things of this technology. It has obvious applications - and the paths to the things these people want are at least outlined for future development.

    "More and more evidence will emerge that generative AI and large language models provide false information and are prone to hallucination."

    Evidence is pointless. That’s what LLMs do. That’s how they work. They’re just constructing plausible sentences. Only fools and charlatans promise otherwise.

    There are plenty of uses for a machine that confidently plows ahead with that-sounds-about-right text. Especially if it can do it in any style, meter, or language you want, just by asking. But it’s fundamentally not a bullshit detector.

    Bullshit detectors are an option, by the way. Neural networks are great for complex inputs producing simple outputs. Scan a picture and get “not hotdog.” And we can trivially produce endless tiny variations on inputs that are varying degrees of correct, defensible, unknown, or misleading. The real trick will be keeping it up to date for any information that changes over time. Ideally by having it scan a database outside the network… but let’s be honest, they’ll just pour compute power into retraining the network every year.
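    That “complex input, simple output” shape is easy to demo. Here is a minimal toy sketch (entirely invented for illustration, not anyone’s actual detector): a bag-of-words perceptron that maps short claims to a binary verdict, the text-classification cousin of “not hotdog.” The training examples and labels below are made-up assumptions.

```python
# Toy "bullshit detector" sketch: a bag-of-words perceptron.
# Complex input (free text) in, simple output (one label) out.
from collections import defaultdict

def tokens(text):
    return text.lower().split()

def train(examples, epochs=20):
    """Mistake-driven perceptron. examples: list of (text, +1 or -1)."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in examples:
            score = sum(w[t] for t in tokens(text))
            pred = 1 if score >= 0 else -1
            if pred != label:  # only update on mistakes
                for t in tokens(text):
                    w[t] += label
    return w

def classify(w, text):
    score = sum(w[t] for t in tokens(text))
    return "sounds right" if score >= 0 else "bullshit"

# Tiny invented training set; the point is the shape of the problem,
# not the accuracy of this particular model.
data = [
    ("water boils at 100 c at sea level", 1),
    ("the sun rises in the east", 1),
    ("the moon is made of cheese", -1),
    ("the earth is flat", -1),
]
w = train(data)
print(classify(w, "the earth is flat"))  # → bullshit
```

    Keeping something like this current is exactly the hard part noted above: the weights freeze whatever was labeled true at training time, so changed facts mean either an external lookup or periodic retraining.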

    "Some people will start recognizing that it was always a pipe dream to reach anything resembling complex human cognition on the basis of predicting words."

    Oh fuck off. These things are already complex enough to trick you. They achieve plausible output. They’re just wrong a bunch. Have you talked to many humans? Especially young humans? Cognition is not a matter of always being correct or exhibiting perfect logic.

    “AI is whatever hasn’t been done.” The same scoffing post-dictionists would’ve said ten years ago that a glorified autocomplete would never write its own arguments… in the style of Shakespeare.