On Generative AI Security

Microsoft’s AI Red Team just published “Lessons from Red Teaming 100 Generative AI Products.” Their blog post lists “three takeaways,” but the eight lessons in the report itself are more useful:
- Understand what the system can do and where it is applied.
- You don’t have to compute gradients to break an AI system.
- AI red teaming is not safety benchmarking.
- Automation can help cover more of the risk landscape.
- The human element of AI red teaming is crucial.
- Responsible AI harms are pervasive but difficult to measure.
- LLMs amplify existing security risks and introduce new ones...
News URL
https://www.schneier.com/blog/archives/2025/02/on-generative-ai-security.html
Related news
- Innovation vs. security: Managing shadow AI risks (source)
- AI threats and workforce shortages put pressure on security leaders (source)
- How AI and automation are reshaping security leadership (source)
- Enterprises walk a tightrope between AI innovation and security (source)
- AI agents swarm Microsoft Security Copilot (source)
- How AI agents could undermine computing infrastructure security (source)
- AI-Powered SaaS Security: Keeping Pace with an Expanding Attack Surface (source)
- After Detecting 30B Phishing Attempts, Microsoft Adds Even More AI to Its Security Copilot (source)
- Week in review: Chrome sandbox escape 0-day fixed, Microsoft adds new AI agents to Security Copilot (source)
- Generative AI is reshaping financial fraud. Can security keep up? (source)