Ouch.

  • OsrsNeedsF2P@lemmy.ml
    8 hours ago

Holy smokes, I stand corrected. The chatbot actually misunderstood the context to the point that it told the human to die, out of the blue.

It’s not every day you get shown a source that proves you wrong. Thanks, kind stranger.

    • kautau@lemmy.world
      4 hours ago

      Yeah holy shit, screenshotting this in case Google takes it down, but this leap is wild

    • megane-kun@lemm.ee
      8 hours ago

      No problem. I understand the skepticism here, especially since the article in the OP is a bit light on the details.


      EDIT:

The details in the OP article are fine enough, but it didn’t link to sources.

    • Mog_fanatic@lemmy.world
      5 hours ago

One thing that throws me off here is the double response. I haven’t used Gemini a ton, but it has never once given me multiple replies; it’s always one statement per my one statement. You can see at the end here that there’s a double response, which makes me think some user input is missing. There’s also missing text in the user statements leading up to it, which makes me wonder what the person was asking in full. Something about this still smells fishy to me, but I’ve heard enough goofy things about how AIs learn weird shit to believe it’s possible.

      • WolfLink
        4 hours ago

Idk what you mean by “double response”. The user typed a statement, not a question, and the AI responded with its weird answer.

        I think the lack of a question or specific request in the user text led to the weird response.