Security News
The actual number of people exposed to political and other deepfakes is expected to be much higher given many Americans are not able to decipher what is real versus fake, thanks to the sophistication of AI technologies. "It's not only adversarial governments creating deepfakes this election season, it is now something anyone can do in an afternoon. The tools to create cloned audio and deepfake video are readily available and take only a few hours to master, and it takes just seconds to convince you that it's all real. The ease with which AI can manipulate voices and visuals raises critical questions about the authenticity of content, particularly during a critical election year. In many ways, democracy is on the ballot this year thanks to AI," said Steve Grobman, McAfee's CTO. In a world where AI-generated content is widely available and capable of creating realistic visual and audio content, seeing is no longer believing.
Recent cybercriminal campaigns use voice cloning technology to replicate the speech tone and patterns of celebrities such as Elon Musk, MrBeast, Tiger Woods, and others, and use them to endorse fake contests, gambling, and investment opportunities. In this Help Net Security video, Bogdan Botezatu, Director of Threat Research and Reporting at Bitdefender, discusses the growing trend of celebrity audio deepfakes.
LastPass revealed this week that threat actors targeted one of its employees in a voice phishing attack, using deepfake audio to impersonate Karim Toubba, the company's Chief Executive Officer. While 25% of people have been on the receiving end of an AI voice impersonation scam or know someone who has, according to a recent global study, the LastPass employee didn't fall for it because the attacker used WhatsApp, an uncommon channel for business communication.
AI deepfakes were not on the risk radar of organisations until recently, but in 2024 they are rising up the ranks. Aon's Global Risk Management Survey, for example, does not mention them, though organisations are concerned about business interruption and damage to their brand and reputation, both of which AI could cause. Huber said the risk of AI deepfakes is still emergent, and it continues to morph as AI evolves at a rapid pace.
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to...
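YARA rules commonly flag malware by matching fixed strings or byte patterns, which is why LLM-assisted rewriting of those strings can defeat them. Below is a minimal Python sketch of that weakness (this is not the YARA engine itself; the marker string and payload bytes are invented for illustration):

```python
import re

# Simplified stand-in for a YARA-style rule: flag any sample that
# contains this fixed marker string (hypothetical, for illustration).
RULE_PATTERN = re.compile(rb"EvilCorp_Loader_v1")

def matches_rule(sample: bytes) -> bool:
    """Return True if the sample contains the rule's fixed pattern."""
    return RULE_PATTERN.search(sample) is not None

original = b"...payload...EvilCorp_Loader_v1...payload..."
# An automated rewrite that renames the marker would leave malicious
# behavior intact in real malware, while breaking the string match.
rewritten = original.replace(b"EvilCorp_Loader_v1", b"Updater_Helper_v2")

print(matches_rule(original))   # True: fixed pattern present
print(matches_rule(rewritten))  # False: one string change evades the rule
```

Real YARA rules can combine multiple strings, hex patterns, and conditions, but the core exposure is the same: a rule anchored to static artifacts is brittle against automated code rewriting.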
A Chinese-speaking threat actor codenamed GoldFactory has been attributed to the development of highly sophisticated banking trojans, including a previously undocumented iOS malware called...
B.J. Herbison February 5, 2024 11:36 AM. Was the call recorded? On the call we have a bunch of scammers and one person who says, "The deepfakes were great, I was fooled," and sends the money. The "worried about a phishing email" detail might be just posturing.
Cyber attacks using AI-generated deepfakes to bypass facial biometrics security will lead a third of organizations to doubt the adequacy of identity verification and authentication tools as standalone protections. Remote account recovery, for example, might rely on an image of the individual's face to unlock security.
Cybercriminals have Canada in the crosshairs, with five Ontario hospitals hit by a cyberattack and a fresh Spamouflage disinformation campaign targeting "dozens" of Canadian government officials, including the Prime Minister. The cyberattack against five southern Ontario hospitals has shut down IT systems, forcing them to cancel patient appointments over "the next few days," according to service provider TransForm. On Monday, the services org posted an alert saying that its member hospitals and Windsor-Essex Hospice were experiencing a systems outage, which included email.