- cross-posted to:
- [email protected]
Companies are training LLMs on all the data they can find, but this data is not the world; it is discourse about the world. The rank-and-file developers at these companies, in their naivete, do not see that distinction… So, as these LLMs become increasingly but asymptotically fluent, tantalizingly close to accuracy yet ultimately incomplete, developers complain that they are short on data. They have their general-purpose computer program, and if only they had the entire world in data form to shove into it, then it would be complete.
I think most ML researchers are aware that the data isn't perfect, but, crucially, it exists in a digestible form.