Security News > 2024 > March > A Taxonomy of Prompt Injection Attacks

A Taxonomy of Prompt Injection Attacks
2024-03-08 12:06

Researchers ran a global prompt hacking competition, and have documented the results in a paper that both gives a lot of good examples and tries to organize a taxonomy of effective prompt injection strategies.

These deployments are vulnerable to prompt injection and jailbreaking, in which models are manipulated to ignore their original instructions and follow potentially malicious ones.

Although widely acknowledged as a significant security threat, there is a dearth of large-scale resources and quantitative studies on prompt hacking.

To address this lacuna, we launch a global prompt hacking competition, which allows for free-form human input attacks.

We elicit 600K+ adversarial prompts against three state-of-the-art LLMs. We describe the dataset, which empirically verifies that current LLMs can indeed be manipulated via prompt hacking.

We also present a comprehensive taxonomical ontology of the types of adversarial prompts.


News URL

https://www.schneier.com/blog/archives/2024/03/a-taxonomy-of-prompt-injection-attacks.html