Security News > 2024 > August > Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls

Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls
2024-08-13 10:46

Despite worries about criminals using prompt injection to trick large language models into leaking sensitive data or performing other destructive actions, most of these types of AI shenanigans come from job seekers trying to get their resumes past automated HR screeners - and people protesting generative AI for various reasons, according to Russian security biz Kaspersky.

Prompt injection happens when a user feeds a model with a particular input intended to force the LLM to ignore its prior instructions and do something it's not supposed to do.

Prompt injection attacks involve overriding that prior instruction.

In its most recent research, Kaspersky set out to determine who is using prompt injection attacks in real-world situations, and for what purposes.

In addition to direct prompt injection, the team also took a look at attempts at indirect prompt injection - when someone prompts LLMs to do something bad by embedding the injections in a webpage or online document.

They did see "Active use of prompt injection" in human resources and job recruiting, "Where LLM-based technologies are deeply embedded and where the incentives to game the system in the hope of landing that dream job are strong." The idea here being to catch out and manipulate bots scraping online profiles and other pages for resumes to recommend for particular jobs, by including some text to make sure the models look more favorably on the job seeker.


News URL

https://go.theregister.com/feed/www.theregister.com/2024/08/13/who_uses_llm_prompt_injection/