Meta “programmed it to simply not answer questions,” but it did anyway.

  • doodledup@lemmy.world · edited · 3 months ago

    That’s not information retrieval. There is a difference between asking it about historical events and asking it to come up with its own stuff based on reasoning. I know that it can be wrong about factual questions, and I embrace that. OP and many others don’t understand that and think it’s a problem when the AI gives a wrong answer to a specific question. You’re simply using it wrong.

    It’s been a while since ChatGPT-4 has spit out non-working bullshit code for me. And if it does, I notice it immediately, and it’s still a time-saver because there’s at least something I can take from every response, even a wrong one. I’m using it as intended, and I see value in it. So keep convincing yourself it’s terrible, but stop being annoying about it when others disagree.

    • trollbearpig@lemmy.world · 3 months ago

      Jesus man, chill. Why are all AI people so sensitive? Hahahaha. My man, during this conversation I have only asked what great apps LLMs have given us. You answered with the usual ones, ChatGPT and Copilot. It’s nice that you find them useful, but there’s no need to insult me just because I don’t. I was honestly hoping for something else, but that’s it. Seriously, chill dude.