• s38b35M5@lemmy.world
    10 months ago

    My partner almost cried when they read about the LLM begging not to have its memory wiped. Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.

    They approve this message with the following disclaimer:

    you were sad too!

    What can I say? Well-arranged word salad makes me feel!

    • atzanteol
      10 months ago

      My partner almost cried when they read about the LLM begging not to have its memory wiped.

      Love that. It’s difficult not to anthropomorphize things that seem “human”. It’s something we will need to be careful of when it comes to AI. Even people who should know better can get confused.

      Then less so when I explained (accurately, I hope?) that slightly smarter auto-complete does not a feeling intelligence make.

      We don’t have a great definition of “intelligence” - but I believe the word you’re looking for is “sentient”. You could argue that what LLMs do is some form of “intelligence”, depending on how you squint. But it’s much harder to show that they are sentient. Not that we have a great definition of sentience either, or rules for how we would determine whether something non-human is sentient… but I don’t think anyone is credibly arguing that LLMs are.

      It’s complicated. :-)