Security News
Awareness of generative AI among consumers is high: 67% say they are aware of the technology. Yet consumers overestimate their ability to detect a deepfake video, with 52% of respondents believing they could spot one, according to Jumio.
The rise of AI-generated identity fraud like deepfakes is alarming, with 37% of organizations experiencing voice fraud and 29% falling victim to deepfake videos, according to a survey by Regula. In this Help Net Security video, Henry Patishman, Executive VP of Identity Verification Solutions at Regula, illustrates how the increasing accessibility of AI tools for creating deepfakes is compounding the risks, posing a significant challenge for businesses and individuals alike.
"AI-generated fake identities can be difficult for humans to detect, unless they are specially trained to do so. While neural networks may be useful in detecting deepfakes, they should be used in conjunction with other antifraud measures that focus on physical and dynamic parameters, such as face liveness checks, document liveness checks via optically variable security elements, etc.," says Ihar Kliashchou, CTO at Regula. At the same time, advanced identity fraud is not only about AI-generated fakes.
The term "deepfake" refers to photo, video or audio content that has been manipulated to make it seem that the subject did or said something they never did. Such content is created using AI and machine learning techniques.
Panic over the risk of deepfake scams is completely overblown, according to a senior security adviser for UK-based infosec company Sophos. "The thing with deepfakes is that we aren't seeing a lot of it," Sophos researcher John Shier told El Reg last week.
The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract. Specifically, the researchers apply fluid dynamics to estimate the arrangement of the human vocal tract during speech generation, and show that deepfakes often model impossible or highly unlikely anatomical arrangements.
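To give a flavor of the idea, here is a minimal sketch (not the researchers' actual pipeline) of one classic acoustic model: the vocal tract approximated as a uniform tube closed at the glottis and open at the lips, which resonates at formant frequencies F_k = (2k - 1)c / 4L. Each measured formant therefore implies a tube length, and lengths far outside human anatomy suggest a physically implausible "speaker". The formant values, plausibility bounds, and function names below are illustrative assumptions.

```python
# Illustrative sketch: check whether measured formants are consistent with
# a human-sized vocal tract, modeled as a uniform tube closed at one end.
# Such a tube resonates at F_k = (2k - 1) * c / (4 * L), so each formant
# implies a tube length L; adult vocal tracts are roughly 13-20 cm
# (bounds here are an assumption for illustration).

SPEED_OF_SOUND = 34300.0  # speed of sound in warm air, cm/s

def implied_tract_length(formant_hz: float, k: int) -> float:
    """Tube length (cm) implied by the k-th formant of a uniform tube."""
    return (2 * k - 1) * SPEED_OF_SOUND / (4.0 * formant_hz)

def plausible_speaker(formants_hz: list[float],
                      min_cm: float = 13.0, max_cm: float = 20.0) -> bool:
    """True if every formant maps to an anatomically plausible length."""
    lengths = [implied_tract_length(f, k)
               for k, f in enumerate(formants_hz, start=1)]
    return all(min_cm <= length <= max_cm for length in lengths)

# A ~17 cm tract gives formants near 500, 1500, 2500 Hz:
print(plausible_speaker([500.0, 1500.0, 2500.0]))  # consistent tract
print(plausible_speaker([900.0, 1500.0, 2500.0]))  # F1 implies ~9.5 cm
```

Real detectors would estimate formants from audio (e.g. via linear prediction) and use far richer anatomical constraints, but the core intuition is the same: human speech is bound by physics that a generative model may violate.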
Cyber criminals are taking advantage of this easy access to resources, and using deepfakes to build on today's crime techniques, such as business email compromise, to make off with even more money, according to Trend Micro researchers. Specifically, corporations need to worry about deepfakes, we're told, as criminals begin using them to create fake individuals, such as job seekers to scam their way into roles, or impersonate executives on video calls to hoodwink employees into transferring company funds or data.
VMware found that a quarter of all ransomware attacks included double-extortion techniques, with top methods including blackmail, data auctions, and name-and-shame. The use of deepfakes also shot up this year, rising 13 percentage points, with 66 percent of respondents reporting that deepfakes featured in an attack. 65 percent of respondents noted that cyberattacks had increased since Russia invaded Ukraine, and 62 percent said they'd been on the receiving end of zero-day exploits.
Deepfake attacks and cyber extortion are creating mounting risks. In February, VMware reported seeing a new type of malware deployed in one of the largest targeted attacks in history focused solely on the destruction of critical information and resources. "This is part of a growing list of destructive malware deployed against Ukraine, as noted in a joint advisory the Cybersecurity and Infrastructure Security Agency and the FBI released this spring," the report stated.
Threat actors exchange beacons for badgers to evade endpoint security
Unidentified cyber threat actors have started using Brute Ratel C4, an adversary simulation tool similar to Cobalt Strike, to try to evade detection by endpoint security solutions and gain a foothold on target networks, Palo Alto Networks researchers have found.

Attackers are using deepfakes to snag remote IT jobs
Malicious individuals are using stolen personally identifiable information and voice and video deepfakes to try to land remote IT, programming, database and software-related jobs, the FBI warned last week.