GCHQ's NCSC warns of 'realistic possibility' AI will help state-backed malware evade detection
An article published today by the UK National Cyber Security Centre suggests there is a "realistic possibility" that by 2025 the most sophisticated attackers' tools will improve markedly, thanks to AI models trained on data describing successful cyberattacks.
At the lower end, cyber criminals who rely on social engineering are expected to enjoy a significant boost from the wide-scale uptake of consumer-grade generative AI tools such as ChatGPT, Google Bard, and Microsoft Copilot.
"Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing, and coding. This trend will almost certainly continue to 2025 and beyond," the report states.
"Phishing, typically aimed either at delivering malware or stealing password information, plays an important role in providing the initial network accesses that cyber criminals need to carry out ransomware attacks or other cyber crime. It is therefore likely that cyber criminal use of available AI models to improve access will contribute to the global ransomware threat in the near term."
"The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term."
"As the NCSC does all it can to ensure AI systems are secure by design, we urge organizations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defenses and boost their resilience to cyber attacks."
News URL
https://go.theregister.com/feed/www.theregister.com/2024/01/24/ncsc/