Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.
In other words, when shown a shoddy snippet and asked to fill in the blanks, a model is just as likely to repeat the mistake as it is to fix it.
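As a purely hypothetical illustration (not an example from the research itself), the failure mode looks something like this: the prompt contains a buggy idiom, and the completion mirrors the idiom instead of correcting it.

```python
# Hypothetical sketch of the "parroting" failure mode.
# The function names and bugs here are invented for illustration.

def last_items(lst, n):
    # Buggy context shown to the model: the slice drops the final element.
    return lst[-n:-1]  # BUG: excludes lst[-1]

# A completion model asked to write a companion function in the same
# file often reproduces the flawed slicing pattern instead of fixing it:
def first_items(lst, n):
    return lst[0:n - 1]  # parroted bug: returns n-1 items, not n

# Both functions come up one element short:
assert last_items([1, 2, 3, 4], 2) == [3]    # a correct version would give [3, 4]
assert first_items([1, 2, 3, 4], 2) == [1]   # a correct version would give [1, 2]
```

The point is that the model treats the buggy pattern as house style to be continued, not as an error to be flagged.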
o7
Thank you for your service, toxic Stack Overflow commenters, who are often wrong yourselves and then get corrected by other, even more toxic commenters.