• booly · 6 months ago

    I guess I don’t understand why one would expect an LLM, or any similar system (including a human brain), to be able to answer 100% of these questions just because it has ingested every book on the topic.

    It’s inherently an exercise in applying a bunch of rules probabilistically, so expecting 100% accuracy would be unreasonable, even of a person or system that had successfully memorized every single rule.