Security News > 2024 > March > LLM Prompt Injection Worm
Researchers have demonstrated a worm that spreads through prompt injection.
In the second method, the researchers say, an image with a malicious prompt embedded makes the email assistant forward the message on to others.
While ongoing research highlighted risks associated with the GenAI layer of agents, a critical question emerges: Can attackers develop malware to exploit the GenAI component of an agent and launch cyber-attacks on the entire GenAI ecosystem?
This paper introduces Morris II, the first worm designed to target GenAI ecosystems through the use of adversarial self-replicating prompts.
The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output, engaging in malicious activities.
The worm is tested against three different GenAI models, and various factors influencing the performance of the worm are evaluated.
News URL
https://www.schneier.com/blog/archives/2024/03/llm-prompt-injection-worm.html