• @conciselyverbose

    Alex demonstrated that ChatGPT was lying intentionally

    No, he most certainly did not. LLMs have no agency. “Intentionally” doing anything isn’t possible.

    • @[email protected]OP

      LLMs have no agency.

      Define “agency”. Why do you have agency but an LLM doesn’t?

      “Intentionally” doing anything isn’t possible.

      I see “intention” as a goal in this context. ChatGPT explained that the goal was to make the conversation appear “natural” (i.e., human-like). That was the intention/goal behind its lying to Alex.

      • @[email protected]

        That “intention” is not made by ChatGPT, though. Its developers intend for conversation with the LLM to appear natural.

        • @[email protected]OP

          ChatGPT says this itself. But why does an intention have to originate with ChatGPT itself? Our intentions are often trained into us by others. Take propaganda as an example: political propaganda, corporate propaganda (advertising), and so on.

          • @[email protected]

            We have the ability to form our own intentions. The fact that we sometimes follow others’ intentions doesn’t change that.

            Also, if you wrote “I am conscious” on a piece of paper, would that mean the paper is conscious? Would the paper now have the intent to hold a natural conversation with you? There is not much difference between that paper and what ChatGPT is doing.

            • @[email protected]OP

              The main problem is how to define what “we” refers to here. Our brain is a biological machine governed by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).

              We respond to stimuli. That’s all that we do. So what does “we” even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.

              There sure is complexity in how we respond to stimuli.

              The deeper problem here is the absence of an objective definition of consciousness. We simply don’t know how to define consciousness (yet).

              This is largely what leads to questions like the one you just raised.