Should we use AI in cybersecurity? Yes, but with caution and human help

For AI to be effective, the technology needs access to data, including sensitive internal documents and customer information.
AI technology has limitations that can stem from a lack of system resources, insufficient computing power, poorly defined or poorly implemented algorithms, or weak rules and definitions.
The trick, according to Banks, is striking a balance between AI and human input.
Banks said critical decisions, especially those regarding users, should be entrusted to a human analyst who has the final say in how to proceed or what to change.
To make his case for human intervention and control of AI processes, Banks used a physical-security example: automatic security gates that restrict unauthorized traffic.
Banks' argument is not about whether AI technology should be deployed, but about how much control humans retain over it once it is.
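To make the human-in-the-loop idea concrete, here is a minimal sketch, not taken from the article, of how an organization might route AI recommendations: routine, high-confidence actions run automatically, while anything touching a user account is queued for an analyst's final say. The names (`DecisionGate`, `AIRecommendation`, `confidence_floor`) and the threshold value are illustrative assumptions, not anything Banks or the article specifies.

```python
# Illustrative human-in-the-loop gate for AI-driven security actions.
# Assumption: recommendations carry an action, a target, a model confidence,
# and a flag for whether they affect a user account.
from dataclasses import dataclass, field
from enum import Enum, auto


class Verdict(Enum):
    AUTO_EXECUTE = auto()    # low-risk: the AI's recommendation runs unattended
    NEEDS_ANALYST = auto()   # high-risk or user-impacting: a human decides


@dataclass
class AIRecommendation:
    action: str            # e.g. "block_ip", "disable_account"
    target: str            # e.g. an IP address or a username
    confidence: float      # model confidence in [0.0, 1.0]
    affects_user: bool     # does the action touch a user account?


@dataclass
class DecisionGate:
    """Routes AI recommendations either to automation or to a human analyst."""
    confidence_floor: float = 0.95                 # hypothetical threshold, tune per org
    review_queue: list = field(default_factory=list)

    def triage(self, rec: AIRecommendation) -> Verdict:
        # Anything affecting a user, or anything the model is unsure about,
        # is escalated to the review queue rather than executed automatically.
        if rec.affects_user or rec.confidence < self.confidence_floor:
            self.review_queue.append(rec)
            return Verdict.NEEDS_ANALYST
        return Verdict.AUTO_EXECUTE


if __name__ == "__main__":
    gate = DecisionGate()
    print(gate.triage(AIRecommendation("block_ip", "203.0.113.7", 0.99, affects_user=False)))
    print(gate.triage(AIRecommendation("disable_account", "jsmith", 0.99, affects_user=True)))
```

In this sketch the AI still does the heavy lifting of detection and recommendation; the gate simply enforces the principle that critical, user-facing decisions end with a human analyst.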
Related news
- Cybersecurity in the AI Era: Evolve Faster Than the Threats or Get Left Behind (source)
- Strategic AI readiness for cybersecurity: From hype to reality (source)
- 13 core principles to strengthen AI cybersecurity (source)
- The future of AI in cybersecurity in a word: Optimistic (source)
- AI and automation shift the cybersecurity balance toward attackers (source)
- How agentic AI and non-human identities are transforming cybersecurity (source)
- AI vs AI: How cybersecurity pros can use criminals’ tools against them (source)
- AI hallucinations and their risk to cybersecurity operations (source)
- Adversarial AI: The new frontier in financial cybersecurity (source)