• 0x0@programming.dev
    7 months ago

    So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

    • Grimy@lemmy.world
      7 months ago

      It just means there’s a bias in the data that is probably being amplified during training.

      It answers what’s relevant according to its training.
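      A minimal sketch of the amplification idea mentioned above (all data and numbers here are hypothetical, chosen only for illustration): when a feature is only weakly predictive, a classifier trained on an imbalanced dataset tends to fall back on the majority class, so the skew in its outputs ends up larger than the skew that was already in the data.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 10_000

      # Training labels are imbalanced: roughly 70% class 1, 30% class 0.
      y = (rng.random(n) < 0.70).astype(int)

      # The feature carries only a weak signal about the label.
      x = (y + rng.normal(0, 3.0, n)).reshape(-1, 1)

      model = LogisticRegression().fit(x, y)
      preds = model.predict(x)

      # The predicted share of class 1 comes out well above the 70% in the data,
      # i.e. the model amplifies the imbalance it was trained on.
      print(f"share of class 1 in the data:        {y.mean():.2f}")
      print(f"share of class 1 in the predictions: {preds.mean():.2f}")
      ```

      Running it, the data sits near 0.70 while the hard predictions land much closer to 1.0, which is the sense in which a bias present in the training set can be amplified rather than merely reproduced.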