Large language models continue to be unreliable for election information. Our research substantially improved the reliability of safeguards against election misinformation in German in the Microsoft Copilot chatbot. However, barriers to data access greatly restricted our investigations into other chatbots.
Until you can explain the entire logic path from input to output, including cases where the AI makes a logical mistake, you can't trust the data.
I haven't seen any results on another important requirement: an AI that can "forget" or discard information that is bad.