@[email protected] to [email protected] • English • 1 month ago
We have to stop ignoring AI's hallucination problem (www.theverge.com)
209 comments
minus-square@[email protected]linkfedilinkEnglish2•edit-21 month agoThey are right though. LLM at their core are just about determining what is statistically the most probable to spit out.
minus-square@[email protected]linkfedilinkEnglish0•1 month agoYour 1 sentence makes more sense than the slop above.