- cross-posted to:
- [email protected]
- [email protected]
From https://twitter.com/llm_sec/status/1667573374426701824
- People ask LLMs to write code
- LLMs recommend imports that don’t actually exist
- Attackers work out what those imports’ names are, then create and upload packages under those names with malicious payloads
- People using LLM-written code then pull the malware in themselves (a minimal defensive check is sketched after this list)
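
Not from the tweet, but as a rough illustration of how a developer could guard against this: the sketch below checks whether an LLM-suggested package name is actually registered on PyPI via its public JSON API (`https://pypi.org/pypi/<name>/json`). An unregistered name is exactly the kind an attacker could squat on. The package names in the example are placeholders, and an existing name still doesn’t prove the package is trustworthy.

```python
# Sketch: flag LLM-suggested package names that are not registered on PyPI,
# since those are the names an attacker could upload malware under.
import urllib.error
import urllib.request


def exists_on_pypi(package_name: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # name is unclaimed on PyPI
            return False
        raise  # other HTTP errors: don't guess, surface them


if __name__ == "__main__":
    # Hypothetical names an LLM might have put in generated code.
    suggested = ["requests", "totally-made-up-helper-lib"]
    for name in suggested:
        if exists_on_pypi(name):
            print(f"{name}: registered on PyPI (still review before installing)")
        else:
            print(f"{name}: NOT on PyPI, a hallucinated name an attacker could register")
```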
@Tovervlag Please don’t. I just read an article the other day about researchers demonstrating a prompt injection that takes advantage of that very thing.
@erlingur