AI threat landscape: Model theft and inference attacks emerge as top concerns
Enterprises will invest nearly $16 billion worldwide in GenAI solutions in 2023, according to IDC. In this Help Net Security interview, Guy Guzner, CEO at Savvy, discusses the challenges and opportunities presented by in-house AI models, the security landscape surrounding them, and the future of AI cybersecurity.
Organizations developing in-house AI models have a distinct advantage in addressing critical security concerns.
Model theft, inference attacks, and data poisoning are some of the potential attacks against AI models highlighted by analysts.
Of the highlighted attacks, model theft and inference attacks are particularly menacing.
Model theft lets malicious actors copy a proprietary model outright, handing them a shortcut to a valuable AI solution without the cost of developing it.
Inference attacks, by contrast, exploit the model's responses to deduce sensitive information, such as whether a specific record appeared in the training data, from seemingly harmless queries.
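To make the inference-attack idea concrete, here is a minimal, hypothetical sketch of one well-known variant, a confidence-threshold membership inference attack. It is not from the article or from Savvy; the `query_model` function, the record names, and the threshold are all illustrative assumptions standing in for a real deployed model API.

```python
# Illustrative sketch of a confidence-threshold membership inference
# attack. The attacker sees only the model's output confidence and
# guesses whether a record was part of the training set. All names
# and values below are hypothetical.

def query_model(record):
    """Stand-in for a deployed model API returning top-class
    confidence. A real attack would query a remote endpoint."""
    # Toy behavior: the model is overconfident on data it memorized.
    memorized = {"alice", "bob"}
    return 0.99 if record in memorized else 0.62

def infer_membership(record, threshold=0.9):
    """Guess 'training member' when confidence exceeds the threshold;
    overfit models tend to be far more confident on training data."""
    return query_model(record) >= threshold

suspects = ["alice", "carol", "bob", "dave"]
members = [r for r in suspects if infer_membership(r)]
print(members)  # → ['alice', 'bob']
```

The attacker never needs model weights or training data access, which is what makes the queries look harmless: each one is an ordinary prediction request, yet together they leak who was in the training set.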
News URL
https://www.helpnetsecurity.com/2023/10/30/guy-guzner-savvy-in-house-ai-models/
Related news
- Ease the Burden with AI-Driven Threat Intelligence Reporting (source)
- Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks (source)
- 'Skeleton Key' attack unlocks the worst of AI, says Microsoft (source)
- TAG-100: New Threat Actor Uses Open-Source Tools for Widespread Attacks (source)
- SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks (source)
- Protecting AI systems from cyber threats (source)
- AI-generated deepfake attacks force companies to reassess cybersecurity (source)
- Enhancing threat detection for GenAI workloads with cloud attack emulation (source)