• Empricorn@feddit.nl · 63 points · 7 months ago

    Some “AI” LLMs resort to light hallucinations. And then ones like this straight-up gaslight you!

• eatCasserole@lemmy.world · 50 points · 7 months ago

      Factual accuracy in LLMs is “an area of active research”, i.e. they haven’t the foggiest how to make them stop spouting nonsense.

• Swedneck@discuss.tchncs.de · 28 points · 7 months ago

DuckDuckGo figured this out quite a while ago: just fucking summarize Wikipedia articles and link to the precise section the text was lifted from.
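(A minimal sketch of the "link to the precise section" half of that idea — the summarizer itself is out of scope, and the function name here is illustrative, not anything DuckDuckGo actually ships. It just builds a citation URL using Wikipedia's standard section-anchor scheme, spaces becoming underscores:)

```python
def section_link(title: str, section: str) -> str:
    """Build a URL pointing at a specific section of a Wikipedia article,
    so a generated summary can cite exactly where its text came from."""
    slug = title.strip().replace(" ", "_")      # article titles use underscores
    anchor = section.strip().replace(" ", "_")  # section anchors do too
    return f"https://en.wikipedia.org/wiki/{slug}#{anchor}"

print(section_link("Language model", "Evaluation"))
# -> https://en.wikipedia.org/wiki/Language_model#Evaluation
```

The point being: a citation that is *constructed* from the retrieved source can't be hallucinated the way free-form generated text can.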

      • Excrubulent@slrpnk.net · 12 points · 7 months ago (edited)

        Because accuracy requires making a reasonable distinction between truth and fiction, and that requires context, meaning, understanding. Hell, actual humans aren't that great at this task. This isn't a small problem; I don't think you solve it without creating AGI.