• Soyweiser@awful.systems · 13 points · 7 months ago

    I think solving the AI hallucination problem — I think that’ll be fixed.

    Wasn’t this an unsolvable problem?

    • Amoeba_Girl@awful.systems · 20 points · edited · 7 months ago

      it’s unsolvable because it’s literally how LLMs work lol.

      though to be fair i would indeed love for them to solve the LLMs-outputting-text problem.

        • zogwarg@awful.systems · 5 points · 7 months ago

          Sed quis custodiet ipsos custodes? = But who will control the controllers?

          Which in a beautiful twist of irony is thought to be an interpolation in the texts of Juvenal (in manuscript speak, an insert added by later scribes)

    • VirtualOdour · 1 point · 7 months ago

      Yeah, but only in one limited way of doing things - like how you can’t raise water using geometry alone, but obviously there are endless things like lock gates, pumps, etc. which can be added to a water transport system to raise it.

      It is a hard one though; even people do the exact same thing LLMs do - the Mandela effect and the inaccuracy of witness testimony are clear examples. Sometimes we don’t know that we don’t know something, or we’re sure we do - visual illusions where our mind fills in the blanks are a similar thing.

      The human brain has a few little loops we run things through which are basically sanity checks, but not everyone applies the same level of thinking to what they’re saying: Alex Jones, Trump, and certain people on lemmy aren’t interested in whether what they’re saying is true, only that it serves their purpose. It’s learnt behavior, and we can construct neural nets that contain the same sort of sanity checking, or go a level beyond and have the model, behind the scenes, break its answer into a layer of axioms and information points and test each one against a fact-checking network.
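
      Very roughly, that kind of sanity-check layer could look something like the sketch below. This is purely illustrative: the model name, the prompts, and the idea of using a second model call as the “fact checking network” are all assumptions, not how any existing system works.

      # Illustrative only: break an answer into claims and check each one
      # with a second pass. A real fact-checking network would use retrieval
      # or a dedicated verifier, not just another LLM call.
      from openai import OpenAI

      client = OpenAI()
      MODEL = "gpt-4o-mini"  # placeholder model name

      def extract_claims(answer: str) -> list[str]:
          # Ask the model to split the answer into individually checkable statements.
          resp = client.chat.completions.create(
              model=MODEL,
              messages=[{
                  "role": "user",
                  "content": "List every factual claim in the text below, one per line:\n" + answer,
              }],
          )
          return [line.lstrip("-• ").strip()
                  for line in resp.choices[0].message.content.splitlines()
                  if line.strip()]

      def check_claim(claim: str) -> bool:
          # Stand-in for the "fact checking network": a second call that labels the claim.
          resp = client.chat.completions.create(
              model=MODEL,
              messages=[{
                  "role": "user",
                  "content": "Reply with exactly SUPPORTED or UNSUPPORTED: " + claim,
              }],
          )
          return "UNSUPPORTED" not in resp.choices[0].message.content.upper()

      def sanity_check(answer: str) -> dict[str, bool]:
          # Map each extracted claim to whether the checker accepted it.
          return {claim: check_claim(claim) for claim in extract_claims(answer)}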

      It’s all stuff that we’re going to see tried in the upcoming GPT-5. Self-tasking is the next big step to get right: working out the process required to obtain an accurate answer and then working through the steps.
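
      And for what “self tasking” might mean in practice, an equally hand-wavy sketch (again, the model name, prompts, and helper functions are invented for illustration): plan the steps needed for an accurate answer first, then work through them one at a time, carrying the intermediate results forward.

      # Illustrative only: plan the steps, execute them one at a time,
      # and feed each result back in before giving a final answer.
      from openai import OpenAI

      client = OpenAI()
      MODEL = "gpt-4o-mini"  # placeholder model name

      def ask(prompt: str) -> str:
          resp = client.chat.completions.create(
              model=MODEL,
              messages=[{"role": "user", "content": prompt}],
          )
          return resp.choices[0].message.content

      def plan_steps(question: str) -> list[str]:
          # Work out the process required to obtain an accurate answer.
          plan = ask("List the steps needed to answer this accurately, one per line:\n" + question)
          return [s.lstrip("-• ").strip() for s in plan.splitlines() if s.strip()]

      def answer_stepwise(question: str) -> str:
          # Work through the steps, carrying intermediate results forward.
          notes = ""
          for step in plan_steps(question):
              result = ask(f"Question: {question}\nNotes so far:{notes}\n"
                           f"Carry out this step and report the result: {step}")
              notes += f"\n- {step}: {result}"
          return ask(f"Question: {question}\nUsing these notes, give a final answer:{notes}")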