Security News > 2024 > March > Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats
2024-03-13 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves
News URL
https://thehackernews.com/2024/03/researchers-highlight-googles-gemini-ai.html
Related news
- Google Chrome’s AI feature lets you quickly check website trustworthiness (source)
- Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI (source)
- MUT-1244 targeting security researchers, red teamers, and threat actors (source)
- Google says new scam protection feature in Chrome uses AI (source)
- Google Chrome uses AI to analyze pages in new scam detection feature (source)
- The sixth sense of cybersecurity: How AI spots threats before they strike (source)
- How AI and deepfakes are redefining social engineering threats (source)
- Google Project Zero Researcher Uncovers Zero-Click Exploit Targeting Samsung Devices (source)
- Google Cloud Researchers Uncover Flaws in Rsync File Synchronization Tool (source)
- 5 Emerging AI Threats Australian Cyber Pros Must Watch in 2025 (source)