• Naz · 6 months ago

    No, you’re right; I’m being loose with language. I’m not implying the models are conscious or sentient, only that the text they produce can be biased by various internal factors.

    Most commercial/proprietary models have two internal governing agents built in (a code sketch follows this list):

    Coherence Agent: Ensures output is grammatically and factually correct

    Ethics Agent: Ensures output isn’t harmful and steers the model away from inappropriate or illegal activity.
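    Mechanically, that gating could look like a post-generation filter. The sketch below is purely illustrative, assuming a naive blocklist design; the function names and checks are hypothetical stand-ins, not any vendor’s actual implementation.

    ```python
    # Hypothetical two-stage gate matching the "agents" described above.
    # Real systems use trained classifiers and fine-tuning, not string matching.

    def coherence_check(text: str) -> bool:
        # Stand-in for a grammar/factuality validator.
        return len(text.strip()) > 0

    def ethics_check(text: str) -> bool:
        # Stand-in for a harm/safety classifier; here, a toy blocklist.
        blocked_phrases = {"how to build a weapon"}
        return not any(phrase in text.lower() for phrase in blocked_phrases)

    def generate(prompt: str, model) -> str:
        draft = model(prompt)  # model is any callable: prompt -> text
        if not coherence_check(draft):
            return "[regenerate: incoherent output]"
        if not ethics_check(draft):
            return "I can't help with that."
        return draft
    ```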

    Regardless, a judgment can be a statement that resembles an opinion, even though an LLM possesses no opinions of its own, e.g. (a toy sketch follows the exchange):

    “What is your favorite color?”

    A) Blue {95.7%, statistical mean}

    “Why blue?”

    A) “Because it is the color of the sky” {∆%}.
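    In sampling terms, the “favorite” is just the highest-probability continuation. A toy illustration (the numbers are invented to match the example above):

    ```python
    # The model holds no opinion; it simply emits the most probable answer.
    color_probs = {"blue": 0.957, "green": 0.021, "red": 0.012, "purple": 0.010}

    favorite = max(color_probs, key=color_probs.get)
    print(f"A) {favorite.capitalize()} ({color_probs[favorite]:.1%})")
    # -> A) Blue (95.7%)
    ```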

    If the model is coded, for instance, not to talk about the color blue, it’ll say something like:

    “I believe all colors of the rainbow are valid and it is up to each individual to decide their favorite color”.

    That’s a bit of a non-answer. It avoids bias and opinionated speech, but at the same time the operator’s ethics mandate has rendered that particular model incapable of forming “judgments” about that bit of text (say, a favorite color).
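    One hedged guess at how such a restriction could be implemented is output masking: zero out the blocked option and renormalize, so “blue” can never be sampled and no strong preference survives, hence the bland deflection. Purely illustrative; real guardrails typically work through system prompts or fine-tuning rather than a hard mask.

    ```python
    # Mask the blocked token and renormalize the remaining probabilities.
    color_probs = {"blue": 0.957, "green": 0.021, "red": 0.012, "purple": 0.010}

    masked = {c: p for c, p in color_probs.items() if c != "blue"}
    total = sum(masked.values())
    masked = {c: round(p / total, 3) for c, p in masked.items()}

    print(masked)
    # {'green': 0.488, 'red': 0.279, 'purple': 0.233}
    ```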