• plasticcheese@lemmy.one · 2 days ago

    The more I use various LLMs, the more I’ve come to realise that they have a tendency to confidently lie. More often than not, an LLM will give me the answer it thinks I want to hear, even when the details of its answer are factually incorrect.

    Using these tools to make decisions that affect real people’s lives is a very dangerous prospect.

    Interesting article. Thanks

  • SubArcticTundra@lemmy.ml · 2 days ago

    Thank you for this much-needed reality check. I don’t understand why the Government are doing venture capital’s bidding.

  • aaron@infosec.pub · 2 days ago

    Presumably ‘AI’ can make simple rules-based decisions, if done properly (unfortunately, this being the UK government, that is a big ‘if’).

    But what exactly is sacking a million people supposed to do to the economy?

    • froztbyte@awful.systems · 2 days ago

      Presumably ‘AI’ can make simple rules based decisions, if done properly

      honest question: was this meant seriously, or in jest?

      • aaron@infosec.pub · 2 days ago

        Serious.

        1. Fill in form online
        2. AI analyses it, decides if applicant is entitled to benefits.

        Why do you ask the question?
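        For what it’s worth, a genuinely rules-based eligibility check doesn’t need an LLM at all; it’s ordinary deterministic code. A minimal sketch of step 2, where every field name and threshold is invented for illustration and does not reflect any real benefits rules:

        ```python
        # Hypothetical sketch of a deterministic, rules-based eligibility check.
        # Field names and thresholds are invented; no real benefits rules implied.

        def eligible_for_benefit(income: float, savings: float, dependants: int) -> bool:
            """Same inputs always give the same answer -- no hallucination possible."""
            income_limit = 16_000 + 3_000 * dependants  # invented threshold
            savings_limit = 6_000                        # invented threshold
            return income < income_limit and savings < savings_limit

        print(eligible_for_benefit(income=12_000, savings=2_000, dependants=1))  # True
        print(eligible_for_benefit(income=25_000, savings=2_000, dependants=1))  # False
        ```

        The point of the sketch: rules like these are auditable and reproducible, which is exactly the property an LLM making the same decision would not have.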

        • Tar_Alcaran · 1 day ago

          “AI” in the context of the article means “LLMs”. So, by definition, not trustworthy.

        • self@awful.systems · 2 days ago

          why do you think hallucinating autocomplete can make rules-based decisions reliably

          AI analyses it, decides if applicant is entitled to benefits.

          why do you think this is simple