
F5 Warns Australian IT of Social Engineering Risk Escalation Due to Generative AI
2023-10-11 09:32

Experts from security firm F5 have argued that cyber criminals are unlikely to send new armies of generative AI-driven bots into battle with enterprise security defences in the near future, because proven social engineering attack methods will be easier to mount using generative AI. The release of generative AI tools such as ChatGPT has caused widespread fears that the democratisation of powerful large language models could help bad actors around the world supercharge their efforts to hack businesses and steal sensitive data or hold it hostage.

F5, a multicloud security and application delivery provider, told TechRepublic that generative AI will drive growth in both the volume and the quality of social engineering attacks in Australia, as threat actors deliver more convincing attacks to trick IT gatekeepers.

"These attacks could be well structured in any language - it is very impressive. So I worry about social engineering and phishing attacks."

Australian IT teams can expect to be on the receiving end of social engineering attack growth.

F5 said the main counter to bad actors' evolving techniques and capabilities will be education, ensuring employees are made aware of the increasing sophistication of AI-assisted attacks. "Scams that trick employees into doing something - like downloading a new version of a corporate VPN client or tricking accounts payable to pay some nonexistent merchant - will continue to happen," Woods said.

Bad actors will choose social engineering over bot attacks.


News URL

https://www.techrepublic.com/article/f5-generative-ai-cybersecurity-interview/

Related vendor

VENDOR LAST 12M #/PRODUCTS LOW MEDIUM HIGH CRITICAL TOTAL VULNS
F5 141 6 267 399 64 736