On Generative AI Security

Microsoft’s AI Red Team just published “Lessons from Red Teaming 100 Generative AI Products.” Their blog post lists “three takeaways,” but the eight lessons in the report itself are more useful:

- Understand what the system can do and where it is applied.
- You don’t have to compute gradients to break an AI system.
- AI red teaming is not safety benchmarking.
- Automation can help cover more of the risk landscape.
- The human element of AI red teaming is crucial.
- Responsible AI harms are pervasive but difficult to measure.
- LLMs amplify existing security risks and introduce new ones...
News URL
https://www.schneier.com/blog/archives/2025/02/on-generative-ai-security.html
Related news
- How AI and ML are transforming digital banking security
- AI-driven insights transform security preparedness and recovery
- AI security posture management will be needed before agentic AI takes hold
- Deploying AI at the edge: The security trade-offs and how to manage them
- Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks
- AI and Security - A New Puzzle to Figure Out
- Google Chrome's AI-powered security feature rolls out to everyone
- CrowdStrike Security Report: Generative AI Powers Social Engineering Attacks
- Innovation vs. security: Managing shadow AI risks
- AI threats and workforce shortages put pressure on security leaders