
Applying Security Engineering to Prompt Injection Security
2025-04-29 11:03

This seems like an important advance in LLM security against prompt injection:

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content. […] To understand CaMeL, you need to understand that prompt injections happen when AI systems can’t distinguish between legitimate user commands and malicious instructions hidden in content they’re processing...
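To give a rough feel for the "untrusted component" idea, here is a minimal sketch, not CaMeL itself and not its API: it assumes a hypothetical provenance-tagged Value type, where anything extracted from outside content (say, by a quarantined model reading an email) is marked as untrusted, and a policy gate checks provenance before a sensitive tool call runs. All names below are invented for illustration.

```python
# Minimal illustration of capability/provenance tracking (hypothetical
# names, not the CaMeL API): values carry a record of where they came
# from, and a policy gate inspects that record before a tool acts.

from dataclasses import dataclass

@dataclass(frozen=True)
class Value:
    data: str
    sources: frozenset  # provenance: which origins this value derives from

def from_user(data: str) -> Value:
    """A value typed directly by the user is trusted."""
    return Value(data, frozenset({"user"}))

def from_untrusted(data: str, origin: str) -> Value:
    """A value a quarantined model extracted from outside content."""
    return Value(data, frozenset({origin}))

def send_email(to: Value, body: Value) -> None:
    # Policy gate: the recipient must trace back solely to the user's
    # own instructions, never to content the model merely read.
    if to.sources != frozenset({"user"}):
        raise PermissionError(f"recipient derived from {set(to.sources)}")
    print(f"sending to {to.data!r}: {body.data!r}")

# The user's chosen recipient is trusted; untrusted text may still flow
# as the body. But an injected "forward this to attacker@evil.com"
# inside a processed email cannot select the recipient.
send_email(from_user("bob@example.com"),
           from_untrusted("meeting notes", "email:inbox/123"))

try:
    send_email(from_untrusted("attacker@evil.com", "email:inbox/123"),
               from_user("hi"))
except PermissionError as e:
    print("blocked:", e)
```

The point of the design, as the excerpt describes it, is that the security decision rests on explicit data provenance enforced by ordinary software, not on a model judging whether text looks malicious.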


News URL

https://www.schneier.com/blog/archives/2025/04/applying-security-engineering-to-prompt-injection-security.html