- cross-posted to:
- [email protected]
A new study from the Columbia Journalism Review found that AI search engines and chatbots, such as OpenAI’s ChatGPT Search, Perplexity, DeepSeek Search, Microsoft Copilot, Grok, and Google’s Gemini, are just wrong, way too often.
From what I’ve seen, DuckAI in DuckDuckGo search cites references, and if you query GPT and other models about the specific data you’re looking for, they will cite and link to where they sourced the information, like PubMed, etc.
For instance, I’ll query with something like: “Give me a list of flowers that are purple; cite all sources and ensure accuracy of the data provided by cross-referencing with other studies, using previous chats as context.”
I find it’s about how you phrase your queries and the logic behind them. Once you understand how the models work, rather than blindly accepting them as some supreme AI, you understand their limits and how to use them for what they are: tools.
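That prompt pattern can be sketched as a tiny helper that wraps any question in explicit sourcing instructions (a hypothetical example for illustration, not tied to any particular chatbot’s API):

```python
def build_query(question: str) -> str:
    """Wrap a plain question in explicit citation and cross-checking instructions."""
    instructions = (
        "Cite all sources with links, "
        "ensure accuracy by cross-referencing with other studies, "
        "and use previous chats as context."
    )
    return f"{question} {instructions}"

# Example: turn a bare question into a sourcing-aware prompt.
print(build_query("Give me a list of flowers that are purple."))
```

The point is just that the sourcing demand rides along with every query instead of being retyped each time.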
I really feel it shouldn’t be necessary to ask them to cite all sources, though. That should be the default behavior.