• Amoeba_Girl@awful.systems · 4 months ago

    Yes, that is indeed the sort of question I was expecting. But anyway, good thing the LLM didn’t have just one book, but oodles of books, expert opinion, and past exam data at its disposal! Oh wait, it didn’t help, and the machine built especially to give correct answers failed to give correct answers :(

    • booly · 4 months ago

      I guess I don’t understand how one would expect an LLM or any other system like that (including human brains) to be able to answer 100% of these questions just because it has ingested every book on the topic.

      It’s inherently an exercise in applying a bunch of rules in a probabilistic manner, so expecting 100% accuracy would be unreasonable even of a person or system that had successfully memorized every single rule.