Researchers have found that large language models (LLMs) tend to parrot buggy code when tasked with completing flawed snippets.

That is to say, when shown a snippet of shoddy code and asked to fill in the blanks, AI models are just as likely to repeat the mistake as to fix it.

  • mindbleach · 8 days ago

    Yes, that’s how you’d expect it to work. That is how it do.

    Diffusion-based models are the ones that tweak everything. LLMs just keep going. Especially if you only ask it to “finish,” rather than say something like “fix.”
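    To illustrate the kind of completion being described, here is a hypothetical flawed snippet of the sort a model might be asked to finish. Nothing below is from the study itself; it is a made-up example showing how an autoregressive continuation tends to preserve an existing off-by-one bug, while an explicit "fix" request invites the one-line correction:

    ```python
    # Hypothetical buggy snippet: the loop bound skips the last element.
    def sum_list_buggy(xs):
        total = 0
        for i in range(len(xs) - 1):  # bug: should be range(len(xs))
            total += xs[i]
        return total

    # A model told only to "finish" such code tends to continue the
    # established (buggy) pattern. Told to "fix" it, the correction is small:
    def sum_list_fixed(xs):
        total = 0
        for i in range(len(xs)):  # iterate over every element
            total += xs[i]
        return total

    print(sum_list_buggy([1, 2, 3]))  # 3 -- last element silently dropped
    print(sum_list_fixed([1, 2, 3]))  # 6 -- correct total
    ```

    The point of the comment stands: an LLM completing text is rewarded for local consistency with the prompt, and a buggy prefix is part of that prompt.
    
    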