Security Risks of AI (Security News, April 2023)
As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users.
Many AI products are deployed without institutions fully understanding the security risks they pose.
Organizations building or deploying AI models should incorporate AI concerns into their cybersecurity functions using a risk management framework that addresses security throughout the AI system life cycle.
It will be necessary to grapple with the ways in which AI vulnerabilities differ from traditional cybersecurity bugs, but the starting point is to treat AI security as a subset of cybersecurity and to begin applying vulnerability management practices to AI-based features.
Assessing AI vulnerabilities requires technical expertise distinct from the skill set of most cybersecurity practitioners, so organizations should be cautious about repurposing existing security teams without additional training and resources.
AI security researchers and practitioners should also consult with those addressing AI bias.
News URL
https://www.schneier.com/blog/archives/2023/04/security-risks-of-ai.html
Related news
- CIOs want a platform that combines AI, networking, and security
- Generative AI in Security: Risks and Mitigation Strategies
- Unlocking the value of AI-powered identity security
- Can Security Experts Leverage Generative AI Without Prompt Engineering Skills?
- Eliminating AI Deepfake Threats: Is Your Identity Security AI-Proof?
- Apple Opens PCC Source Code for Researchers to Identify Bugs in Cloud AI Security
- Best AI Security Tools: Top Solutions, Features & Comparisons
- How AI Is Changing the Cloud Security and Risk Equation
- Google claims Big Sleep 'first' AI to spot freshly committed security bug that fuzzing missed
- HackerOne: Nearly Half of Security Professionals Believe AI Is Risky