Security News > 2024 > March > Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

2024-03-13 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves
News URL
https://thehackernews.com/2024/03/researchers-highlight-googles-gemini-ai.html
Related news
- AI threats and workforce shortages put pressure on security leaders (source)
- Google paid $12 million in bug bounties last year to security researchers (source)
- Google DeepMind Proposes AI ‘Monitors’ to Police Hyperintelligent Models (source)
- AI Threats Are Evolving Fast — Learn Practical Defense Tactics in this Expert Webinar (source)
- Inside the AI-driven threat landscape (source)
- Who's calling? The threat of AI-powered vishing attacks (source)
- Google takes on Cursor with Firebase Studio, its AI builder for vibe coding (source)
- Google's AI video generator Veo 2 is rolling out on AI Studio (source)
- Cybersecurity in the AI Era: Evolve Faster Than the Threats or Get Left Behind (source)
- Google blocked over 5 billion ads in 2024 amid rise in AI-powered scams (source)