Summaries are often wrong, usually odd, sometimes funny, rarely helpful.

  • jacksilver@lemmy.world
    1 month ago

    I think it’s because it’s an area where being wrong is acceptable. In most other applications, uncertainty about correctness makes LLMs/AIs more dangerous to deploy.