David Gerard@awful.systems to TechTakes@awful.systems · English · 3 months ago

Don’t use AI to summarize documents — it’s worse than humans in every way (pivot-to-ai.com)
queermunist she/her@lemmy.ml · 3 months ago

Unless it doesn’t accurately represent the topic, which happens, and then a researcher chooses not to read the text based on the chatbot’s summary. Nirvana fallacy. All these chatbots do is guess. I’m just saying a researcher might as well cut out the hallucinating middleman.