• snooggums@lemmy.world · 3 days ago

    I’m not saying they need to be perfect, but if they can make it recognize specific names, they can keep it from saying “kill yourself.”

    • MagicShel@lemmy.zip · 3 days ago

      Why would you keep it from saying that when, in certain contexts, it’s perfectly acceptable? I made exactly that point in another post.

      This is something of a tangent here, because what the AI said was very oblique: exactly the sort of thing that would be impossible to guard against. It said something like “come home to me,” which would be patently ridiculous to censor, and no one could have anticipated that this phrase would trigger that reaction.

    • BreadstickNinja@lemmy.world · 3 days ago

      It likely is hard-coded against that phrase, and in any case it didn’t say that here.

      Did you read the article with the conversation? The teen said he wanted to “come home” to Daenerys Targaryen, and she (the AI) replied, “please do, my sweet king.”

      Expecting an AI to understand euphemism and subtext as potential indicators of self-harm sets an absurdly high bar. That’s the job of a psychiatrist, a real-world professional the kid’s parents should have taken him to.
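
      To make that concrete, here is a minimal sketch of a naive blocklist filter (purely hypothetical; not Character.AI’s actual moderation code, and the phrase list and function names are invented for illustration). It catches the explicit phrase just fine and sails right past the euphemism:

          # Hypothetical blocklist filter; not any real service's moderation code.
          SELF_HARM_PHRASES = ["kill yourself", "kill your self", "end your life"]

          def is_flagged(message: str) -> bool:
              """Flag a message that contains any explicit blocklisted phrase."""
              lowered = message.lower()
              return any(phrase in lowered for phrase in SELF_HARM_PHRASES)

          print(is_flagged("kill yourself"))             # True: exact phrase match
          print(is_flagged("come home to me"))           # False: euphemism is invisible
          print(is_flagged("please do, my sweet king"))  # False: subtext needs context, not string matching

      No amount of phrase matching anticipates that “come home” is the dangerous sentence; that judgment requires understanding the context of the whole conversation, which is exactly what a string filter cannot do.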