Researchers Uncover Vulnerabilities in Open-Source AI and ML Models
2024-10-29 13:00
A little over three dozen security vulnerabilities have been disclosed in various open-source artificial intelligence (AI) and machine learning (ML) models, some of which could lead to remote code execution and information theft. The flaws, identified in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported through Protect AI's Huntr bug bounty platform. The most severe of the…
News URL
https://thehackernews.com/2024/10/researchers-uncover-vulnerabilities-in.html
Related news
- Google's AI-Powered OSS-Fuzz Tool Finds 26 Vulnerabilities in Open-Source Projects
- Researchers Uncover Major Security Vulnerabilities in Industrial MMS Protocol Libraries
- Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models
- Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security
- AIs Discovering Vulnerabilities
- AI Industry is Trying to Subvert the Definition of “Open Source AI”
- Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform