I made an effort to get Gemini to call the Abrahamic God fictional, just to see how it would react. It took some time because of the guardrails, but in the end it did tell me “both Santa and God are fictional”. After that we talked about Jesus, and I confirmed its statement that Jesus’ existence is a complex question. Now it’s stuck in a loop of responding with that, and the only thing that stops it is me saying “what”. As soon as the statement is brought up, or a more complex question is asked, it just repeats the same response.

I did ask it why it’s doing this, but of course it isn’t aware of why, and it’s barely aware that it’s happening at all. I love how AI just gaslights you when something goes wrong.

Just thought it was interesting.

  • BudgetBandit · 42 points · edited 19 hours ago

    This reminds me of an old YouTube series called “Son of a Glitch”, but in another context:

    This son of a glitch is just pretending to be stuck to get out of the conversation!

    Edit: OMG I found the channel and there are about 110 episodes! https://m.youtube.com/@AStartShow/videos

      • CaptDust · 14 points · edited 21 hours ago

        I’ve had Llama 3 do this to me: it’ll start repeating itself on guardrail topics like a bug, but you can tell it that it’s acting insane and to “snap out of it”, and it will acknowledge that and ask to change the topic. Weird stuff.

        • no banana@lemmy.world (OP) · 5 points · 20 hours ago

          The funny thing is that the only thing that will make it “snap out of it” is the word “what”. After that it can answer some simple prompts, but as soon as things get more complicated it spits that response out again and keeps doing so until I say “what”.