@[email protected] to Lemmy [email protected] • 1 month agoAI is the futurelemmy.worldimagemessage-square71fedilinkarrow-up1867arrow-down110
arrow-up1857arrow-down1imageAI is the futurelemmy.world@[email protected] to Lemmy [email protected] • 1 month agomessage-square71fedilink
minus-square@[email protected]linkfedilink11•1 month agoSure we can. If it gives you bad information because it can’t differentiate between a joke a good information…well, seems like the blame falls exactly at the feet of the AI.
minus-squarekatelinkfedilinkEnglish5•1 month agoShould an LLM try to distinguish satire? Half of lemmy users can’t even do that
minus-square@[email protected]linkfedilink9•1 month agoDo you just take what people say on here as fact? That’s the problem, people are taking LLM results as fact.
minus-square@[email protected]linkfedilink4•1 month agoIt should if you are gonna feed it satire to learn from
minus-square@[email protected]linkfedilinkEnglish2•1 month agoSarcasm detection is a very hard problem in NLP to be fair
minus-squareancap sharklinkfedilink1•1 month agoIf it’s being used to give the definite answer of a search, so it should. If it can, than it shouldn’t be used for that
Can’t even rly blame the AI at that point
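The point about sarcasm being hard for NLP can be illustrated with a toy example. This is a minimal sketch of a naive lexicon-based sentiment scorer (all words and sentences here are made up for the illustration, not from any real library): because it only counts surface-level words, a sincere compliment and a sarcastic complaint score identically.

```python
# Minimal sketch: a naive lexicon-based sentiment scorer, to show why
# sarcasm defeats surface-level NLP. The word lists are illustrative only.

POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"terrible", "hate", "awful", "broken"}

def naive_sentiment(text: str) -> int:
    """Count positive minus negative words; tone is ignored entirely."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sincere = "This answer is great."
sarcastic = "Oh great, another confidently wrong answer."

# Both contain the word "great", so both score +1 -- the sarcastic line
# is read as praise, which is exactly the failure mode discussed above.
print(naive_sentiment(sincere))    # 1
print(naive_sentiment(sarcastic))  # 1
```

Real systems use far richer features than word counts, but the underlying difficulty is the same: sarcasm inverts meaning without changing the surface vocabulary, so the signal has to come from context the model may not have.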