Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

2024-03-13 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues affect consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API.
News URL
https://thehackernews.com/2024/03/researchers-highlight-googles-gemini-ai.html
Related news
- AI-Powered Social Engineering: Reinvented Threats
- Google Confirms Android SafetyCore Enables AI-Powered On-Device Content Classification
- Google Chrome's AI-powered security feature rolls out to everyone
- Twin Google flaws allowed researcher to get from YouTube ID to Gmail address in a few easy steps
- Google Cloud KMS Adds Quantum-Safe Digital Signatures to Defend Against Future Threats
- Google expands Android AI scam detection to more Pixel devices
- Google Rolls Out AI Scam Detection for Android to Combat Conversational Fraud
- AI threats and workforce shortages put pressure on security leaders
- Google paid $12 million in bug bounties last year to security researchers
- AI Threats Are Evolving Fast — Learn Practical Defense Tactics in this Expert Webinar