• xmunk
    4 months ago

    Is there a non-AI enhanced one that would be less prone to random hallucinations?

    • ilmagico@lemmy.world
      4 months ago

      Non-AI options can also have “hallucinations” i.e. false positives and false negatives, so if the AI one has a lower false positive/false negative rate, I’m all for it.

      • xmunk
        4 months ago

        Non-AI options are comprehensible so we can understand why they’re failing. When it comes to AI systems we usually can’t reason about why or when they’ll fail.

        • unreliable@discuss.tchncs.de
          4 months ago

          But people are getting dumber and would prefer a magic box they don’t understand to a method where they can know when it is wrong.