20% of Generative AI ‘Jailbreak’ Attacks Succeed, With 90% Exposing Sensitive Data

2024-10-09 16:29
On average, it takes adversaries just 42 seconds and five interactions to execute a GenAI jailbreak, according to Pillar Security.
News URL
https://www.techrepublic.com/article/genai-jailbreak-report-pillar-security/
Related news
- Who's calling? The threat of AI-powered vishing attacks
- Developers Beware: Slopsquatting & Vibe Coding Can Increase Risk of AI-Powered Attacks
- Wallarm Agentic AI Protection blocks attacks against AI agents
- China is using AI to sharpen every link in its attack chain, FBI warns
- New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems
- Meta Launches LlamaFirewall Framework to Stop AI Jailbreaks, Injections, and Insecure Code
- From hype to harm: 78% of CISOs see AI attacks already
- Week in review: Trojanized KeePass allows ransomware attacks, cyber risks of AI hallucinations