Large language models continue to be unreliable for election information. Our research substantially improved the reliability of safeguards against election misinformation in the German-language version of the Microsoft Copilot chatbot. However, barriers to data access greatly restricted our investigations into other chatbots.

  • snooggums@lemmy.world · 3 days ago

    And everything else.

    People who love jerking off to AI probably wouldn’t care if their calculators were only ‘close enough’. Or if their bank statement balance merely looked likely to be true.

  • Inucune@lemmy.world · 3 days ago

    Until you can explain the entire logic path from input to output, including the cases where the AI makes a logical mistake, you can’t trust the data.

    I haven’t seen any results on another important requirement: an AI that can ‘forget’ or discard information that is bad.