Security News > 2024 > August > AI security 2024: Key insights for staying ahead of threats

In this Help Net Security interview, Kojin Oshiba, co-founder of Robust Intelligence, discusses his journey from academic research to addressing AI security challenges in the industry.
What motivated you to specialize in the security aspects of AI systems?
Similar to a WAF, this requires an AI firewall that detects safety and security threats.
In lieu of cohesive regulation, various standards bodies have issued guidelines and frameworks on AI security, including NIST, MITRE, OWASP, US AI Safety Institute, and UK AI Safety Institute.
While there has been a flurry of proposed AI safety and security bills, only a few have been voted into law, most namely the EU AI Act.
AI security will need to evolve to identify and mitigate novel AI attacks that will target connected systems.
News URL
https://www.helpnetsecurity.com/2024/08/08/kojin-oshiba-robust-intelligence-ai-systems-security/
Related news
- AI threats and workforce shortages put pressure on security leaders (source)
- Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks (source)
- On Generative AI Security (source)
- AI-Powered Social Engineering: Reinvented Threats (source)
- AI and Security - A New Puzzle to Figure Out (source)
- Inconsistent security strategies fuel third-party threats (source)
- Google Chrome's AI-powered security feature rolls out to everyone (source)
- CrowdStrike Security Report: Generative AI Powers Social Engineering Attacks (source)
- Innovation vs. security: Managing shadow AI risks (source)
- How AI and automation are reshaping security leadership (source)