• Bye@lemmy.world · 1 year ago

    Not how it works.

    It’s just a fancy version of the “predict the next word” feature smartphones have, like if you just kept tapping the suggested word (see the toy sketch below).

    They don’t even have real, inspectable parameters, only black-box hidden ones.
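
    A toy sketch of that “keep tapping the next word” idea, assuming nothing more than a bigram frequency table; the tiny corpus and names here are made up, and a real LLM learns the equivalent of this table across its hidden parameters:

    ```python
    # Toy sketch: "predict the next word" like a phone keyboard, then keep
    # tapping the top suggestion. The corpus is invented for illustration;
    # an LLM does the same kind of thing with a vastly larger learned model.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the dog slept on the rug".split()

    # Count which word tends to follow which -- the "parameters" of this toy model.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def keep_tapping(start: str, steps: int = 6) -> str:
        """Repeatedly append the most frequent next word, like tapping a suggestion."""
        words = [start]
        for _ in range(steps):
            options = following.get(words[-1])
            if not options:
                break
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(keep_tapping("the"))  # prints a short run of most-likely next words
    ```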

    • funkless_eck · 1 year ago

      I know, I was pointing out the irony.

      I’m convinced its only real purpose is to give tech C-levels and VPs some bullshit to say for roughly 18-36 months, now that “blockchain” and “pandemic disruption” are dead.

      • Bye@lemmy.world · 1 year ago

        Exactly correct, I agree. LLMs will change the world, but 90% of purported use cases are nothing but hot air.

        But when you can tell your phone “go find a picture of an eggplant, put a smiley face on it, and send it to Bill”, that’s going to be pretty neat. And it’s coming in the next decade. Of course that requires a different model than we have now (text to instruction, not text to text). But it’s coming.
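
        A hypothetical sketch of what “text to instruction” could look like: instead of replying with prose, the model emits a structured plan the phone could execute step by step. The action names, schema, and contact are all invented for illustration:

        ```python
        # Hypothetical "text to instruction" output: a structured plan the OS
        # could dispatch in order. Every action name and field here is made up.
        import json

        request = "go find a picture of an eggplant, put a smiley face on it, and send it to Bill"

        plan = [
            {"action": "search_photos", "args": {"query": "eggplant"}},
            {"action": "add_sticker", "args": {"sticker": "smiley_face", "target": "step_1_result"}},
            {"action": "send_message", "args": {"recipient": "Bill", "attachment": "step_2_result"}},
        ]

        print(json.dumps(plan, indent=2))  # a phone assistant would execute these steps in order
        ```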