Security News > 2024 > March > Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats
2024-03-13 10:14
Google's Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well as companies using the LLM API. The first vulnerability involves
News URL
https://thehackernews.com/2024/03/researchers-highlight-googles-gemini-ai.html
Related news
- Chinese chap charged with stealing Google’s AI datacenter secrets (source)
- Ex-Google Engineer Arrested for Stealing AI Technology Secrets for China (source)
- Google engineer caught stealing AI tech secrets for Chinese firms (source)
- Shadow AI is the latest cybersecurity threat you need to prepare for (source)
- Google's new AI search results promotes sites pushing malware, scams (source)
- Google Cloud/Cloud Security Alliance Report: IT and Security Pros Are ‘Cautiously Optimistic’ About AI (source)