• hoshikarakitaridia@lemmy.world
    9 hours ago

    Because in a lot of applications you can bypass hallucinations.

    • getting sources for something
    • as a jump off point for a topic
    • to get a second opinion
    • to help argue for or against your position on a topic
    • get information in a specific format

    In all these applications you can bypass hallucinations because either the task is non-factual, or it’s verifiable while prompting, or you’ll be able to verify it in any of the subsequent tasks.

    Just because it makes shit up sometimes doesn’t mean it’s useless. Like an idiot friend, you can still ask it for opinions, and it will usually start you off somewhere helpful.

    • WalnutLum@lemmy.ml
      5 hours ago

      All LLMs are text completion engines, no matter what fancy bells they tack on.

      If your task is some kind of text completion or repetition of text provided in the prompt context, LLMs perform wonderfully.

      For everything else, you’re wading through territory you could probably handle more easily using other methods.

    • ms.lane@lemmy.world
      8 hours ago

      Also just searching the web in general.

      Google is useless for searching the web today.

      • fibojoly
        3 hours ago

        Not if you want that thing that everyone is on about. Don’t you want to be in with the crowd?! /s