Security News
No, your CEO is not on Teams asking you to transfer money
Deepfakes are coming for your brand, bank accounts, and corporate IP, according to a warning from US law enforcement and cyber agencies.…
A new report from Forrester cautions enterprises to be on the lookout for five categories of deepfake scams that can wreak havoc: fraud, stock price manipulation, reputation and brand damage, employee experience and HR, and amplification.
The FBI said there has been an "uptick" in reports since April of deepfakes being used in sextortion scams, with the images or videos shared online to harass victims with demands for money, gift cards, or other payments. In the advisory, the agency noted the rapid advancement of AI technologies and the increased availability of tools that allow the creation of deepfake material.
Awareness of generative AI among consumers is high: 67% say they are aware of the technology, according to Jumio. But consumers also overestimate their ability to detect a deepfake video, with 52% of respondents believing they could spot one.
The rise of AI-generated identity fraud like deepfakes is alarming, with 37% of organizations experiencing voice fraud and 29% falling victim to deepfake videos, according to a survey by Regula. In this Help Net Security video, Henry Patishman, Executive VP of Identity Verification Solutions at Regula, illustrates how the increasing accessibility of AI tools for creating deepfakes makes the risks mount, posing a significant challenge for businesses and individuals alike.
"AI-generated fake identities can be difficult for humans to detect, unless they are specially trained to do so. While neural networks may be useful in detecting deepfakes, they should be used in conjunction with other antifraud measures that focus on physical and dynamic parameters, such as face liveness checks, document liveness checks via optically variable security elements, etc.," says Ihar Kliashchou, CTO at Regula. At the same time, advanced identity fraud is not only about AI-generated fakes.
The term "deepfake" refers to photo, video, or audio content that has been manipulated to make it seem that the subject is doing or saying something they never did or said. This content is created using AI and machine learning techniques.
Panic over the risk of deepfake scams is completely overblown, according to a senior security adviser for UK-based infosec company Sophos. "The thing with deepfakes is that we aren't seeing a lot of it," Sophos researcher John Shier told El Reg last week.
The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract. Specifically, the researchers apply fluid dynamics to estimate the arrangement of the human vocal tract during speech generation and show that deepfakes often model impossible or highly unlikely anatomical arrangements.
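To make the idea concrete, here is a minimal sketch of one classic acoustic-modeling building block the approach above relies on: the uniform-tube approximation, which relates a speaker's formant frequencies to an implied vocal tract length. The function names, thresholds, and plausibility range below are illustrative assumptions, not the researchers' actual method.

```python
# Uniform-tube approximation: for a tube closed at the glottis and open
# at the lips, the k-th formant is F_k = (2k - 1) * c / (4 * L), where
# c is the speed of sound and L the tract length. Solving for L lets us
# ask whether audio implies an anatomically plausible vocal tract.

SPEED_OF_SOUND_CM_S = 35_000  # approx. speed of sound in warm, moist air (cm/s)

def tract_length_from_formant(formant_hz: float, k: int) -> float:
    """Estimate vocal tract length (cm) implied by the k-th formant."""
    return (2 * k - 1) * SPEED_OF_SOUND_CM_S / (4 * formant_hz)

def is_anatomically_plausible(formants_hz: list[float],
                              min_cm: float = 10.0,
                              max_cm: float = 22.0) -> bool:
    """Flag audio whose implied tract lengths fall outside the range of
    adult human anatomy (thresholds here are illustrative only)."""
    lengths = [tract_length_from_formant(f, k)
               for k, f in enumerate(formants_hz, start=1)]
    return all(min_cm <= length <= max_cm for length in lengths)

# A first formant near 500 Hz (typical for a neutral adult vowel)
# implies a tract length of about 17.5 cm:
print(round(tract_length_from_formant(500.0, 1), 1))  # → 17.5
```

Real deepfake audio, per the summary above, can imply tract shapes no human could produce; a check like this, run per speech segment, is one way such impossibilities would surface.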
Cyber criminals are taking advantage of this easy access to resources, using deepfakes to build on today's crime techniques, such as business email compromise, to make off with even more money, according to Trend Micro researchers. Specifically, corporations need to worry about deepfakes, we're told, as criminals begin using them to create fake individuals, such as job seekers who scam their way into roles, or to impersonate executives on video calls and hoodwink employees into transferring company funds or data.