• @[email protected]
      5 • 1 month ago

      Not sure what would frighten me more: the fact that this is training data, or that it was hallucinated

      • @[email protected]
        English
        4 • edit-2 • 1 month ago

        Neither. In this case it’s an accurate summary of one of the results, which happens to be a shitpost on Quora. See, LLM search results can work as intended and authoritatively repeat search results with zero critical analysis!

    • @[email protected]
      English
      4 • 1 month ago

      Pretty sure AI will start telling us: “You should not believe everything you see on the internet” — as told by Abraham Lincoln