ChatGPT and other AI-themed lures used to deliver malicious software
"Since the beginning of 2023 until the end of April, out of 13,296 new domains created related to ChatGPT or OpenAI, 1 out of every 25 new domains were either malicious or potentially malicious," Check Point researchers have shared on Tuesday.
On Wednesday, Meta said that, since March 2023, it has blocked 1,000+ malicious links that leverage ChatGPT as a lure from being shared across its technologies.
Impersonating ChatGPT. Threat actors generally disguise malware within innocuous-looking files and offer non-existent ChatGPT desktop and mobile apps, as well as browser extensions available in official app stores.
"To target businesses, malicious groups often first go after the personal accounts of people who manage or are connected to business pages and advertising accounts," Nathaniel Gleicher, Head of Security Policy and Ryan Victory, Malware Discovery and Detection Engineer at Meta, explained.
"Threat actors may design their malware to target a particular online platform, including building in more sophisticated forms of account compromise than what you'd typically expect from run-of-the-mill malware. For example, we've seen malware families that can attempt to evade two-factor authentication or have the ability to automatically scan for and detect connections between the compromised account and business accounts it might be linked to."
"We've seen them use cloaking in an attempt to circumvent automated ad review systems, and leverage popular marketing tools like link-shorteners to disguise the ultimate destination of these links. Many of them also changed their lures to other popular themes like Google's Bard and TikTok marketing support. Some of these campaigns, after we blocked malicious links to file-sharing and site hosting platforms, began targeting smaller services, such as Buy Me a Coffee - a service used by creators to accept support from their audiences - to host and deliver malware."
News URL
https://www.helpnetsecurity.com/2023/05/04/malicious-chatgpt/