- cross-posted to:
- [email protected]
Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.
That's what I was thinking: if you gave the code to a person and asked them to finish it, they would likely do the same.
If you instead ask the LLM for insights about the code, it might tell you what's wrong with it.
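To make that concrete, here's a minimal sketch (my own hypothetical example, not from the article) contrasting the two ways of prompting. The buggy snippet and prompt strings are illustrative only; no particular LLM or API is assumed.

```python
# Hypothetical buggy snippet: the loop bound drops the last element (off-by-one).
buggy_snippet = """\
def find_max(values):
    best = values[0]
    for i in range(len(values) - 1):  # bug: should be range(len(values))
        if values[i] > best:
"""

# Completion-style prompt: the model simply continues the flawed context,
# so it tends to reproduce the bug while filling in the blanks.
completion_prompt = buggy_snippet

# Review-style prompt: the same code framed as a question, which makes
# the model more likely to flag the off-by-one error instead.
review_prompt = f"What's wrong with this function?\n\n{buggy_snippet}"

print(review_prompt)
```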