• Lazycog@sopuli.xyz

This was an interesting awareness-raising project.

And the article says they didn’t let the chatbot generate its own responses (which could produce LLM hallucinations); instead they used an LLM in the background to classify the user’s question into a category and return a pre-written answer from that category.
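The pattern described in the article can be sketched roughly like this (a hypothetical illustration, not the project's actual code; the category names, canned answers, and the keyword stub standing in for the background LLM call are all made up):

```python
# Intent-routing pattern: the model only picks a category; the reply text
# is fixed and human-written, so the bot can never hallucinate an answer.

CANNED_ANSWERS = {
    "voting": "Here is vetted information about how to vote.",
    "candidates": "Here is vetted information about the candidates.",
    "unknown": "Sorry, I can only answer questions about the election.",
}

def classify(question: str) -> str:
    """Stand-in for the background LLM call: map a free-form
    question to one of the fixed category keys."""
    q = question.lower()
    if "vote" in q or "ballot" in q:
        return "voting"
    if "candidate" in q:
        return "candidates"
    return "unknown"

def answer(question: str) -> str:
    # The response is always looked up, never generated.
    return CANNED_ANSWERS[classify(question)]

print(answer("How do I cast my ballot?"))
```

Even if the classifier misfires, the worst case is a wrong-but-vetted answer rather than invented text.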