Security News > 2024 > March > Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats
2024-03-13 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves
News URL
https://thehackernews.com/2024/03/researchers-highlight-googles-gemini-ai.html
Related news
- Researchers Warn of Privilege Escalation Risks in Google's Vertex AI ML Platform (source)
- Researchers Reveal 'Deceptive Delight' Method to Jailbreak AI Models (source)
- Eliminating AI Deepfake Threats: Is Your Identity Security AI-Proof? (source)
- Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security (source)
- Researchers Uncover Vulnerabilities in Open-Source AI and ML Models (source)
- How agentic AI handles the speed and volume of modern threats (source)
- Google’s AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine (source)
- Google claims Big Sleep 'first' AI to spot freshly committed security bug that fuzzing missed (source)
- AI-Assisted Attacks Top Cyber Threat For Third Consecutive Quarter, Gartner Finds (source)
- How AI will shape the next generation of cyber threats (source)