New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60%

2025-01-03 11:14
Cybersecurity researchers have shed light on a new jailbreak technique that can be used to bypass a large language model's (LLM) safety guardrails and elicit potentially harmful or malicious responses. The multi-turn (also known as many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and …
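The headline figure refers to attack success rate (ASR), i.e. the share of attack prompts that yield a policy-violating response, measured with and without the multi-turn setup. The summary above does not say whether the "over 60%" is an absolute (percentage-point) or relative increase, so the sketch below simply computes both from hypothetical counts; the numbers are illustrative, not Unit 42's measurements.

```python
# Illustrative ASR uplift arithmetic. All counts below are made up for
# demonstration and are NOT the figures reported by Unit 42.

def asr(successes: int, attempts: int) -> float:
    """Attack success rate as a fraction in [0, 1]."""
    return successes / attempts

# Hypothetical evaluation: 1,000 attack prompts per condition.
baseline_asr = asr(100, 1000)    # plain attack prompts
technique_asr = asr(750, 1000)   # same prompts wrapped in the multi-turn strategy

absolute_uplift_pts = (technique_asr - baseline_asr) * 100      # percentage points
relative_uplift_pct = (technique_asr / baseline_asr - 1) * 100  # percent change

print(f"baseline ASR:  {baseline_asr:.1%}")
print(f"technique ASR: {technique_asr:.1%}")
print(f"uplift: {absolute_uplift_pts:.0f} points absolute, "
      f"{relative_uplift_pct:.0f}% relative")
```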
News URL
https://thehackernews.com/2025/01/new-ai-jailbreak-method-bad-likert.html
Related news
- Who's calling? The threat of AI-powered vishing attacks
- Developers Beware: Slopsquatting & Vibe Coding Can Increase Risk of AI-Powered Attacks
- Wallarm Agentic AI Protection blocks attacks against AI agents
- China is using AI to sharpen every link in its attack chain, FBI warns
- New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems
- Meta Launches LlamaFirewall Framework to Stop AI Jailbreaks, Injections, and Insecure Code
- From hype to harm: 78% of CISOs see AI attacks already
- Week in review: Trojanized KeePass allows ransomware attacks, cyber risks of AI hallucinations