• kate · 1 month ago

    Can’t even really blame the AI at that point

    • @[email protected] · 1 month ago

      Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information… well, seems like the blame falls squarely at the feet of the AI.

      • kate · 1 month ago

        Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that

        • @[email protected] · 1 month ago

          Do you just take what people say on here as fact? That’s the problem: people are taking LLM results as fact.

        • ancap shark · 1 month ago

          If it’s being used to give the definitive answer to a search, then it should. If it can’t, then it shouldn’t be used for that