AI threat landscape: Model theft and inference attacks emerge as top concerns

Enterprises will spend nearly $16 billion worldwide on GenAI solutions in 2023, according to IDC. In this Help Net Security interview, Guy Guzner, CEO at Savvy, discusses the challenges and opportunities presented by in-house AI models, the security landscape surrounding them, and the future of AI cybersecurity.
Organizations developing in-house AI models have a distinct advantage when it comes to addressing critical security concerns.
Model theft, inference attacks, and data poisoning are some of the potential attacks against AI models highlighted by analysts.
Of the highlighted attacks, model theft and inference attacks are particularly menacing.
Model theft allows malicious actors to steal proprietary models, essentially providing them with a shortcut to valuable AI solutions without the effort of development.
On the other hand, inference attacks exploit the responses of the AI model to deduce sensitive information from seemingly harmless queries.
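To make the inference-attack idea concrete, here is a minimal, self-contained sketch (not from the article; the model, data, and threshold are all hypothetical). It illustrates a membership inference attack: overfitted models tend to return higher confidence on records they memorized during training, so an attacker who can only query the model and observe confidence scores can still guess whether a specific record was in the training set.

```python
import math

# Hypothetical "sensitive" training records and records never seen in training.
TRAIN = [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]
OUTSIDE = [(9.0, 9.0), (7.0, 1.0)]

def model_confidence(point):
    """Stand-in for a deployed model's confidence score: an overfitted
    model is most confident near the training points it memorized."""
    nearest = min(math.dist(point, t) for t in TRAIN)
    return 1.0 / (1.0 + nearest)  # exactly 1.0 on a memorized record

def infer_membership(point, threshold=0.9):
    """Attacker's guess: unusually high confidence suggests the record
    was part of the training set."""
    return model_confidence(point) >= threshold

# The attacker sees only confidences, yet separates members from non-members.
for p in TRAIN + OUTSIDE:
    print(p, infer_membership(p))
```

Real attacks replace the toy confidence function with repeated queries against a production model's output probabilities, but the principle is the same: seemingly harmless queries leak information about the underlying training data.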
News URL
https://www.helpnetsecurity.com/2023/10/30/guy-guzner-savvy-in-house-ai-models/
Related news
- Threat Actors Exploit ClickFix to Deploy NetSupport RAT in Latest Cyber Attacks (source)
- CrowdStrike Security Report: Generative AI Powers Social Engineering Attacks (source)
- How New AI Agents Will Transform Credential Stuffing Attacks (source)
- YouTube warns of AI-generated video of its CEO used in phishing attacks (source)
- Outsmarting Cyber Threats with Attack Graphs (source)
- AI threats and workforce shortages put pressure on security leaders (source)
- MINJA sneak attack poisons AI models for other chatbot users (source)
- New ‘Rules File Backdoor’ Attack Lets Hackers Inject Malicious Code via AI Code Editors (source)
- ⚡ THN Weekly Recap: GitHub Supply Chain Attack, AI Malware, BYOVD Tactics, and More (source)
- Hidden Threats: How Microsoft 365 Backups Store Risks for Future Attacks (source)