Large language models continue to be unreliable for election information. Our research substantially improved the reliability of safeguards against election misinformation in German in the Microsoft Copilot chatbot. However, barriers to data access greatly restricted our investigations into other chatbots.
Stop using a helicopter to mow grass.
And everything else.
People who love jerking off to AI probably wouldn’t care if their calculators were ‘close enough’. Or if their bank statement balance looked likely to be true.
Maybe the real intelligence was the hallucinations we made along the way.
Until you can explain the entire logic path from input to output, even when the AI makes a logical mistake, you can't trust the data.
I haven’t seen any results on another important requirement: an AI that can 'forget' or discard information that is bad.