Security News > 2024 > August > AI security 2024: Key insights for staying ahead of threats
In this Help Net Security interview, Kojin Oshiba, co-founder of Robust Intelligence, discusses his journey from academic research to addressing AI security challenges in the industry.
What motivated you to specialize in the security aspects of AI systems?
Similar to a WAF, this requires an AI firewall that detects safety and security threats.
In lieu of cohesive regulation, various standards bodies have issued guidelines and frameworks on AI security, including NIST, MITRE, OWASP, US AI Safety Institute, and UK AI Safety Institute.
While there has been a flurry of proposed AI safety and security bills, only a few have been voted into law, most namely the EU AI Act.
AI security will need to evolve to identify and mitigate novel AI attacks that will target connected systems.
News URL
https://www.helpnetsecurity.com/2024/08/08/kojin-oshiba-robust-intelligence-ai-systems-security/
Related news
- Eliminating AI Deepfake Threats: Is Your Identity Security AI-Proof? (source)
- Businesses turn to private AI for enhanced security and data management (source)
- Obsidian Security Warns of Rising SaaS Threats to Enterprises (source)
- CIOs want a platform that combines AI, networking, and security (source)
- Generative AI in Security: Risks and Mitigation Strategies (source)
- Unlocking the value of AI-powered identity security (source)
- Can Security Experts Leverage Generative AI Without Prompt Engineering Skills? (source)
- Evolving cloud threats: Insights and recommendations (source)
- Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security (source)
- Best AI Security Tools: Top Solutions, Features & Comparisons (source)