• Kichae@kbin.social
    1 year ago

    Yuuuup.

    Language models, like any model, only interpolate from what they’ve been trained on. They can easily answer questions they’ve seen answered a million times already, but they do that through stored word associations, not reasoning.

    In other words, describe your symptoms in a way that isn’t popular, and you’ll get “misdiagnosed”.

    And they have a real problem with making up citations of every type: fabricated textbooks, newspaper articles, legal decisions, and entire academic journals. They can recognize the citation pattern and reproduce it, but because any specific citation is rare compared to other word combinations (most papers get cited dozens of times, not the millions of times LLMs need to form confident associations between words), they just fill the citation format in with basically whatever.