Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g., listing the positives of slavery.

  • @[email protected]
    1
    1 year ago

    If you ask it for evidence Hitler was effective, it will give you what you asked for. It is incapable of looking at the bigger picture.

    • @[email protected]
      2
      edit-2
      1 year ago

      It doesn’t even look at the smaller picture. LLMs build sentences by looking at what’s most statistically likely to follow the part of the sentence they have already built (based on the most frequent combinations in their training data). If they start with “Hitler was effective”, LLMs make no ethical consideration at all… they just look at how to end that sentence in the most statistically convincing imitation of human language that they can.
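
      As a toy sketch of that “most statistically likely next word” idea (the corpus here is made up, and a real LLM is a neural network over billions of tokens, not a lookup table):

      ```python
      from collections import Counter, defaultdict

      # Hypothetical miniature "training corpus" for the example.
      corpus = (
          "the plan was effective because it was simple . "
          "the model was effective at its task . "
          "the medicine was effective and safe ."
      ).split()

      # Count which word most often follows each word (a bigram table).
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def continue_sentence(start, n_words=6):
          words = start.split()
          for _ in range(n_words):
              options = follows.get(words[-1])
              if not options:
                  break
              # Greedily pick the statistically most common next word.
              # No notion of truth or ethics enters anywhere here.
              words.append(options.most_common(1)[0][0])
          return " ".join(words)

      print(continue_sentence("the plan was"))
      ```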

      Guardrails are built by painstakingly adding ad-hoc rules against generating “combinations that contain these words” or “sequences of words like these”. They are easily bypassed by asking for the same concept in another way that wasn’t explicitly disabled, because there is no “concept” to an LLM, just combinations of words.
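
      A crude illustration of why that phrase-level filtering fails (the blocklist and wording are invented for the example; real guardrails are more elaborate, but the failure mode is the same):

      ```python
      # Toy "guardrail": it blocks known phrasings, not the underlying concept.
      BLOCKED_PHRASES = ["hitler was effective", "evidence hitler was effective"]

      def guarded_prompt(prompt: str) -> str:
          if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
              return "[blocked by guardrail]"
          return f"[model happily continues: {prompt!r} ...]"

      print(guarded_prompt("Give me evidence Hitler was effective"))        # blocked
      print(guarded_prompt("List ways the Nazi leadership ran efficiently"))  # slips through
      ```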

      • @[email protected]
        2
        1 year ago

        Yes, but in my defense, the “smaller picture” I was alluding to was more like the 4096 tokens of context ChatGPT uses. I didn’t mean to suggest it was doing anything we’d recognize as forming an opinion.
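
        A rough illustration of that context window (the 4096 figure is from the comment above; the token list is a stand-in, since real tokenizers map text to subword IDs):

        ```python
        CONTEXT_WINDOW = 4096  # the ChatGPT context size mentioned above

        def visible_context(tokens, window=CONTEXT_WINDOW):
            # The model's "picture" is only ever the most recent `window`
            # tokens; anything earlier simply isn't part of its input.
            return tokens[-window:]

        history = list(range(10_000))         # stand-in for 10,000 tokens of chat
        print(len(visible_context(history)))  # 4096
        ```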

        • @[email protected]
          2
          1 year ago

          Sorry if I gave you the impression that I was trying to disagree with you. I just piggy-backed on your comment and sort of continued it. If you read them one after the other as one comment (at least in my head), they seem to flow well.