AI tools like ChatGPT expected to fuel BEC attacks (April 2023)

Across all BEC attacks seen over the past year, 57% relied on language as the main attack vector to get them in front of unsuspecting employees, according to Armorblox.
Language remained the main attack vector in 4 out of 5 BEC attacks that bypassed legacy security solutions in 2022.
20% of BEC attacks involved graymail or unwanted solicitation, and security teams can find themselves spending upwards of 27 person-hours a week manually sorting and deleting graymail across inboxes.
"Based on threats analyzed by Armorblox across our customer base of over 58,000 organizations, we see over half of email attacks targeting critical business workflows aim to exfiltrate sensitive user data. These attacks often involve bad actors infiltrating legitimate business communications to alter sensitive business information, such as assigning new routing numbers for payment requests," said DJ Sampath, CEO of Armorblox.
"These attacks use language as the primary attack vector to impersonate trusted SaaS applications, vendors, and VIPs. This only increases the critical need for organizations to augment native and legacy security layers with modern API-based solutions that use a broad set of deep learning algorithms, machine learning models, data science approaches, and natural language-based techniques to understand the content and context of communications, and protect against these targeted attacks," concluded Sampath.
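To make the idea of language-based detection concrete, here is a deliberately simplified sketch of the kind of signal such systems look for: an email that pairs urgency cues with a change to payment details, as in the routing-number scenario Sampath describes. The patterns, function name, and thresholds below are hypothetical illustrations; commercial products like the ones discussed use trained deep learning and NLP models, not keyword rules.

```python
import re

# Hypothetical heuristic, for illustration only: flag message bodies that
# combine an urgency cue with a payment-detail change. Real API-based email
# security tools model content and context with ML, not regexes like these.
URGENCY = re.compile(r"\b(urgent|immediately|asap|right away)\b", re.I)
PAYMENT_CHANGE = re.compile(
    r"\b(new|updated|changed)\b.{0,40}"
    r"\b(routing number|bank account|wire instructions|payment details)\b",
    re.I,
)

def looks_like_bec(body: str) -> bool:
    """Return True when both an urgency cue and a payment-change phrase appear."""
    return bool(URGENCY.search(body)) and bool(PAYMENT_CHANGE.search(body))
```

A rule this crude would miss most real attacks and misfire on legitimate mail, which is exactly why the article argues for models that understand content and context rather than surface keywords.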
News URL
https://www.helpnetsecurity.com/2023/04/17/bec-attacks-language-attack-vector/
Related news
- AI-Powered Social Engineering: Ancillary Tools and Techniques
- CrowdStrike Security Report: Generative AI Powers Social Engineering Attacks
- Online crime-as-a-service skyrockets with 24,000 users selling attack tools
- How New AI Agents Will Transform Credential Stuffing Attacks
- YouTube warns of AI-generated video of its CEO used in phishing attacks
- MINJA sneak attack poisons AI models for other chatbot users
- New ‘Rules File Backdoor’ Attack Lets Hackers Inject Malicious Code via AI Code Editors
- How AI, corruption and digital tools fuel Europe’s criminal underworld
- ⚡ THN Weekly Recap: GitHub Supply Chain Attack, AI Malware, BYOVD Tactics, and More
- AI-Powered SaaS Security: Keeping Pace with an Expanding Attack Surface