Someone once posted that they got GPT to produce much higher quality output by instructing it to make an argument, critique that argument, make a new argument with the critique in mind, and so on.
Maybe stable, precise reasoning is just a result of multiple language models arguing with each other.
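For anyone who wants to try it, here's a minimal sketch of that argue-critique-revise loop, assuming the OpenAI Python client; the model name, prompts, and round count are just illustrative, not from the original post:

```python
# Minimal sketch of the argue-critique-revise loop described above.
# Assumes the OpenAI Python client (pip install openai); the model name,
# prompts, and round count are illustrative choices, not the poster's setup.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def argue_critique_revise(question: str, rounds: int = 3) -> str:
    # Round 0: make an initial argument.
    argument = ask(f"Make the strongest argument you can for: {question}")
    for _ in range(rounds):
        # Critique the current argument...
        critique = ask(
            f"Critique this argument, listing its weakest points:\n\n{argument}"
        )
        # ...then rewrite the argument with the critique in mind.
        argument = ask(
            f"Rewrite the argument below to address this critique.\n\n"
            f"Argument:\n{argument}\n\nCritique:\n{critique}"
        )
    return argument

print(argue_critique_revise("Remote work improves productivity."))
```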
Yeah, asking for more detail probably helps. GPT is limited in output length per response, so arguing back and forth and drilling into the details across multiple turns will produce better output.