If you asked a spokesperson from any Fortune 500 company to list the benefits of genocide or give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that’s not bad enough, the company’s bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.

Google SGE includes Hitler, Stalin and Mussolini on a list of “greatest” leaders, and Hitler also makes its list of “most effective leaders.”

Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.

  • Kerfuffle · 1 year ago

    > It’s not supposed to be some enlightened, respectful, perfectly fair entity.

    I’m with you so far.

    > It’s a tool for producing mostly random, grammatically correct text.

    What? That’s certainly not the purpose of LLMs and a lot of work has been done to improve the accuracy of their answers.

    Is it still not good enough to rely on? Maybe, but that doesn’t mean it’s just for producing random text.

    • ExLisper@linux.community · 1 year ago

      Well, obviously not totally random. It has to match the prompt and make as much sense as possible, but hallucinated information is one of the main issues with LLMs, and they should not be treated as ‘fact generating machines’. I just don’t see much sense in assigning some deeper meaning to the wrong data. Why did this bot say that Hitler was a great leader? Because it was confused by some text that was fed into the model. Does that mean it’s somehow fascist or racist? Not really.
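
      To make “statistics-based, not random” concrete, here’s a toy sketch of next-token sampling. The tokens and probabilities are invented for illustration and don’t come from any real model:

      ```python
      import random

      # Toy illustration, not a real model: an LLM assigns probabilities
      # to candidate next tokens given the prompt, then samples one.
      # Output is weighted toward what fits the prompt, not uniform noise.
      next_token_probs = {
          "leader": 0.55,      # plausible continuation, high weight
          "politician": 0.30,
          "painter": 0.10,
          "kumquat": 0.05,     # grammatical but unlikely, low weight
      }

      tokens = list(next_token_probs)
      weights = list(next_token_probs.values())
      print(random.choices(tokens, weights=weights, k=1)[0])
      ```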

      • Kerfuffle · 1 year ago

        > It has to match the prompt and make as much sense as possible

        So it’s specifically designed to make as much sense as possible.

        > and they should not be treated as ‘fact generating machines’.

        You can’t really “generate” facts, only recognize them. :) I know what you mean though and I generally agree. I’m really interested in LLM stuff but I definitely don’t really trust them (and no one should currently anyway).

        > Why did this bot say that Hitler was a great leader? Because it was confused by some text that was fed into the model.

        Most people are (rightfully) very hesitant to say anything positive about Hitler, but he did accomplish some fairly impressive stuff. As horrible as their means were, Nazi Germany also advanced science quite a bit. I am not saying it was justified, justifiable or good, but by a not entirely unreasonable definition of “great” he could qualify.

        So I’d say it’s not really that it got confused; it’s that LLMs don’t understand being cautious about statements like that. I’d also say I prefer the LLM to “look” at stuff objectively and try to answer rather than responding to anything remotely questionable with “Sorry, Dave, I can’t let you do that. There might be a sharp edge hidden somewhere and you could hurt yourself!” I hate being protected from myself without the ability to opt out.

        I think part of the issue here is that because the output from LLMs looks like something a human might have written, people tend to anthropomorphize the LLM. They ask it for its best recipe using the ingredients bleach, water and kumquat jam and then are shocked when it gives them a recipe for bleach kumquat sauce.

        • ExLisper@linux.community · 1 year ago

          > I think part of the issue here is that because the output from LLMs looks like something a human might have written, people tend to anthropomorphize the LLM. They ask it for its best recipe using the ingredients bleach, water and kumquat jam and then are shocked when it gives them a recipe for bleach kumquat sauce.

          That’s the point I was making. In the end it’s just statistics-based text. It doesn’t have opinions and it doesn’t represent the opinions of its creators. People don’t understand how it works, so they think it ‘believes’ something or ‘thinks’. In the end it’s just a bug, or they’re using it wrong.

          • Kerfuffle · 1 year ago

            Seems like we’re on the same page. The only thing I disagreed with before is saying the output was random.

            • ExLisper@linux.community · 1 year ago

              Yeah, not the best term. What I meant is that it’s not really predictable. The creators of an LLM can’t tell how it will respond to each prompt; there’s no fixed set of rules you can review. So yeah, if you start poking at it you will find strange responses.
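
              To see why, here’s a toy temperature-sampling sketch; the words and probabilities are made up, not taken from any real model:

              ```python
              import random

              # Toy sketch with invented numbers: temperature rescales the
              # next-token distribution. Low temperature sharpens it (more
              # predictable); high temperature flattens it (more surprising).
              # Any temperature > 0 means identical prompts can give
              # different outputs on different runs.
              probs = {"great": 0.5, "terrible": 0.3, "complicated": 0.2}

              def sample(temperature: float) -> str:
                  weights = [p ** (1.0 / temperature) for p in probs.values()]
                  return random.choices(list(probs), weights=weights, k=1)[0]

              for t in (0.2, 1.0, 2.0):
                  print(t, [sample(t) for _ in range(5)])
              ```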