Researchers Uncover Vulnerabilities in Open-Source AI and ML Models
2024-10-29 13:00
A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI's Huntr bug bounty platform. The most severe of the…
News URL
https://thehackernews.com/2024/10/researchers-uncover-vulnerabilities-in.html
Related news
- Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects (source)
- Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI (source)
- AIs Discovering Vulnerabilities (source)
- AI Industry is Trying to Subvert the Definition of “Open Source AI” (source)
- Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform (source)
- Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks (source)