Cybercriminals train AI chatbots for phishing, malware attacks
2023-08-01 14:08

In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least another one is under development that is allegedly based on Google's AI experiment, Bard.

Both AI-powered bots are the work of the same individual, who appears to be deep in the game of providing chatbots trained specifically for malicious purposes, ranging from phishing and social engineering to exploiting vulnerabilities and creating malware.

An investigation by researchers at cybersecurity company SlashNext reveals that CanadianKingpin12 is actively training new chatbots using unrestricted data sets sourced from the dark web, or basing them on sophisticated large language models developed for fighting cybercrime.

Regardless of DarkBERT's origin and the validity of the threat actor's claims, the trend of using generative AI chatbots is growing, and adoption is likely to keep rising: such tools offer an easy solution for less capable threat actors, as well as for those who want to expand operations into other regions but lack the language skills.

The fact that hackers already have access to two such tools capable of assisting advanced social engineering attacks, both developed within less than a month, "underscores the significant influence of malicious AI on the cybersecurity and cybercrime landscape," SlashNext researchers believe.


News URL

https://www.bleepingcomputer.com/news/security/cybercriminals-train-ai-chatbots-for-phishing-malware-attacks/