Given the propensity for LLMs to make profound mistakes by design due to their inability to discern context - something which the chucklefucks at OpenAI, Facebook, Google, etc. are not going to solve any time soon - it’d be pretty humiliating to only be as thorough as they are. Might I suggest paying more attention to what you’re doing?
That’s not the flex you think it is.
It is not even a flex though? Did you misread perhaps?