Why would anyone want to cite something in a scientific paper that the reader cannot access or verify, and that may or may not be partially or completely fictional?
At the very least, when it comes to analyzing LLMs, e.g. to discuss biases, there should be some common understanding of how to reference the outputs being discussed.
But this is a poor attempt. They just point to the general ChatGPT URL and a date, ignoring that multiple models are available at any given time.