Security News > 2024 > March > Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

2024-03-13 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves
News URL
https://thehackernews.com/2024/03/researchers-highlight-googles-gemini-ai.html
Related news
- Google: Over 57 Nation-State Threat Groups Using AI for Cyber Operations (source)
- The sixth sense of cybersecurity: How AI spots threats before they strike (source)
- How AI and deepfakes are redefining social engineering threats (source)
- Google Project Zero Researcher Uncovers Zero-Click Exploit Targeting Samsung Devices (source)
- Google Cloud Researchers Uncover Flaws in Rsync File Synchronization Tool (source)
- 5 Emerging AI Threats Australian Cyber Pros Must Watch in 2025 (source)
- Google says hackers abuse Gemini AI to empower their attacks (source)
- AI-Powered Social Engineering: Reinvented Threats (source)
- Google Confirms Android SafetyCore Enables AI-Powered On-Device Content Classification (source)
- Google Chrome's AI-powered security feature rolls out to everyone (source)