Security News > 2023 > April > Security Risks of AI
As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users.
Many AI products are deployed without institutions fully understanding the security risks they pose.
Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle.
It will be necessary to grapple with the ways in which AI vulnerabilities are different from traditional cybersecurity bugs, but the starting point is to assume that AI security is a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features.
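To make the recommendation concrete, here is a minimal sketch of what folding AI findings into an ordinary vulnerability-management queue might look like. All names, categories, and life-cycle stages below are hypothetical illustrations, not taken from the report or any standard framework:

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    # Illustrative AI system life-cycle stages (hypothetical labels)
    DATA_COLLECTION = "data collection"
    TRAINING = "training"
    DEPLOYMENT = "deployment"

@dataclass
class Finding:
    asset: str      # affected model or pipeline
    category: str   # e.g. "model evasion", "training-data poisoning"
    severity: int   # 1 (low) .. 5 (critical)
    stage: Stage

@dataclass
class Register:
    findings: list = field(default_factory=list)

    def record(self, finding: Finding) -> None:
        self.findings.append(finding)

    def triage(self) -> list:
        # Highest severity first, as a traditional
        # vulnerability-management queue would order work.
        return sorted(self.findings, key=lambda f: -f.severity)

# Example: AI-specific findings sit in the same queue as any other bug.
reg = Register()
reg.record(Finding("fraud-model-v2", "model evasion", 4, Stage.DEPLOYMENT))
reg.record(Finding("fraud-model-v2", "training-data poisoning", 5, Stage.TRAINING))
for f in reg.triage():
    print(f.severity, f.category, f.stage.value)
```

The point of the sketch is only that AI-specific issues can be recorded, prioritized, and tracked with the same machinery used for conventional vulnerabilities, even while their assessment requires different expertise.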
Assessing AI vulnerabilities requires technical expertise distinct from the skill set of most cybersecurity practitioners, so organizations should be cautious about repurposing existing security teams for this work without additional training and resources.
We also note that AI security researchers and practitioners should consult with those addressing AI bias.
News URL
https://www.schneier.com/blog/archives/2023/04/security-risks-of-ai.html
Related news
- AI security 2024: Key insights for staying ahead of threats
- Unlock the Future of Cybersecurity: Exclusive, Next Era AI Insights and Cutting-Edge Training at SANS Network Security 2024
- The AI balancing act: Unlocking potential, dealing with security issues, complexity
- AI for application security: Balancing automation with human oversight
- Two-Thirds of Security Leaders Consider Banning AI-Generated Code, Report Finds
- Security leaders consider banning AI coding due to security risks
- Digital Maturity Key to AI Success in Australian Cyber Security
- HackerOne: Nearly Half of Security Professionals Believe AI Is Risky
- Generative AI Security: Getting ready for Salesforce Einstein Copilot