• Cephirux@lemmings.world · 1 year ago

          I admit I might be biased towards AI, because I believe AI isn’t biased: it doesn’t have any desire to sleep, breathe, eat, etc. Everyone is capable of critical thinking; the question is whether it’s any good. And since AI is trained by humans, and humans have critical thinking, I don’t see why AI can’t develop it too, although it may not be as good as some people’s.

          • 9point6@lemmy.world · 1 year ago

            All AI has to be biased: that bias comes from the training data, and (inherently biased) humans select the training set. Funnily enough, each node of a neural net even has a learned parameter that is literally called a bias!

            If an AI somehow wasn’t biased, it would simply produce unintelligible garbage.
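
            A minimal sketch of what that per-node bias is, with made-up weights and inputs (the function name and numbers are purely illustrative): each node adds a learned bias term to its weighted sum before the activation.

```python
import math

def neuron(inputs, weights, bias):
    """One node of a neural net: a weighted sum of the inputs
    plus a learned bias term, passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# Same inputs and weights, different output purely because of the bias term.
print(neuron([0.5, 0.2], [0.8, -0.3], bias=0.0))
print(neuron([0.5, 0.2], [0.8, -0.3], bias=2.0))
```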

          • jungle@lemmy.world · 1 year ago

            That’s not how AI works. It’s exactly as biased as the humans who produced the content it was trained on.

            That said, I also don’t believe these models have been trained exclusively on white straight men’s conversations; that would take some effort to achieve.

            More likely, it’s been trained on internet forums, so it’s similar to what it’s being asked to moderate. And as long as there’s a human at the other end of an appeal, it should be fine.
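
            A rough sketch of that kind of flow, assuming the model only makes the first call and any appeal goes to a person; every name here (model_flags, human_reviews, moderate) is hypothetical rather than a real API.

```python
def model_flags(post: str) -> bool:
    """Stand-in for whatever classifier the forum actually runs;
    the keyword check is only a placeholder, not a real model."""
    return "spam" in post.lower()

def human_reviews(post: str) -> bool:
    """Stand-in for a human moderator re-checking an appealed decision."""
    print(f"Escalated to a human moderator: {post!r}")
    return False  # in this sketch the human reinstates the post

def moderate(post: str, appealed: bool = False) -> bool:
    """The model makes the initial call, but an appeal always reaches a human."""
    return human_reviews(post) if appealed else model_flags(post)

# The model removes the post at first; on appeal a human overrides it.
print(moderate("totally not spam, honest"))                 # True  (removed)
print(moderate("totally not spam, honest", appealed=True))  # False (reinstated)
```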

            • btaf45@lemmy.worldOP · 1 year ago

              I’m a computer scientist, and I will tell you right now that AI is biased.

              AI is also constantly wrong.

              ChatGPT lies about science.

              ChatGPT lies about history.

              ChatGPT lies about politics.

              ChatGPT lies about nonexistent programming libraries (an easy one to check; see the sketch below).

              ChatGPT lies about nonexistent legal cases.

              ChatGPT lies about nonexistent criminal backgrounds.

              The only time I would trust ChatGPT is when there are no right or wrong answers.
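
              The “nonexistent programming libraries” point is the easiest one to verify yourself: if a model recommends a package that was never published, installing or importing it simply fails. The package name below is made up for illustration, standing in for whatever a chatbot might hallucinate.

```python
# "quantumsort_pro" is a made-up name standing in for a hallucinated library.
# If no such package is installed (or exists), the import fails immediately.
try:
    import quantumsort_pro  # noqa: F401
except ModuleNotFoundError:
    print("No such library - the recommendation can't be used as written.")
```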

          • AA5B@lemmy.world · 1 year ago

            All AI does is look for patterns to complete. You train it on some data set such as Reddit (which can be biased), give it some sort of feedback for whether it makes the right choice (which can be biased), and it finds whatever patterns it thinks it sees (which may be biased) and applies them to new situations, as the toy sketch below illustrates.
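
            A toy sketch of that loop, with invented numbers: the human labels are the feedback step, and whatever pattern separates them (biased or not) is exactly what gets applied to new inputs.

```python
# Toy sketch: "train" a one-rule moderator on human-labelled examples.
# The labels are the feedback described above; if the people who produced
# them were biased, the learned rule simply reproduces that bias.
training_data = [
    # (comment length in words, human label: 1 = "remove", 0 = "keep")
    (5, 1), (8, 1), (40, 0), (55, 0), (7, 1), (60, 0),
]

threshold = 0.0      # the only "pattern" this toy model can learn
learning_rate = 1.0

for _ in range(20):  # repeatedly nudge the rule to match the feedback
    for length, label in training_data:
        prediction = 1 if length < threshold else 0
        threshold += learning_rate * (label - prediction)

# The rule now flags short comments, purely because that is the pattern
# in the (possibly biased) labels it was given.
print(f"learned threshold: {threshold}")
print("remove" if 6 < threshold else "keep", "a 6-word comment")
```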

    • Chaos@lemmy.world · 1 year ago

      Just checked this with an AI detector and it said human. Bot 1, human 0. This sentence kinda undermines your argument for keeping humans only.