Security News
Humans are still better than AI at crafting phishing emails, but not by much and likely not for long, according to research conducted by IBM X-Force Red. The researchers wanted to see whether ChatGPT is as capable of writing a "good" phishing email as human attackers are.
Generative AI is likely behind the increases in both the volume and sophistication of email attacks that organizations have experienced in the past few months, and it's still early days, according to Abnormal Security. The leading worry is the increased sophistication that generative AI will make possible: in particular, it will help attackers craft highly specific, personalized email attacks based on publicly available information.
An IBM X-Force research project led by Chief People Hacker Stephanie "Snow" Carruthers found that phishing emails written by human security researchers had a 3% better click rate than phishing emails written by ChatGPT. The experiment was conducted at a global healthcare company based in Canada. To get ChatGPT to write an email that would lure someone into clicking a malicious link, the researchers prompted the LLM to draft a persuasive email that took into account the top areas of concern for employees in the target industry, which in this case was healthcare.
As businesses scramble to take the lead in operationalizing AI-enabled interfaces, ransomware actors will use the same technology to scale their operations, widen their profit margins, and increase the likelihood of pulling off successful attacks. Zscaler researchers have charted a 37% rise in ransomware incidents in their cloud in 2023, a triple-digit increase in double-extortion tactics across numerous industries, and an overall surge in sector-specific attacks targeting industries like manufacturing.
The copilotization of all things continues, with the helper now offering incident reports to share with the boss, and more: Microsoft is opening up the early access program for its flagship cybersecurity AI, Security Copilot...
"Security Copilot is an AI assistant for security teams that builds on the latest in large language models and harnesses Microsoft's security expertise and global threat intelligence to help security teams outpace their adversaries," said Vasu Jakkal, corporate vice president, security, compliance, identity, and management at Microsoft. Available in private preview since March 2023, Security Copilot allows security analysts to submit prompts in natural language, much like ChatGPT, to get actionable responses and simplify threat hunting.
With the record-setting growth of consumer-focused AI productivity tools like ChatGPT, artificial intelligence—formerly the realm of data science and engineering teams—has become a resource...
If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns off the rails. Future uses of AI by campaigns will go far beyond deepfaked images.
South American organizations, in contrast, experience attack rates of just 2%, largely due to their practice of verifying IDs against government databases, which creates a more formidable barrier against fraud. The data analysis underscores a notable discrepancy between attacks on document-based verification and attacks on selfie-based verification.
Recently, the cybersecurity landscape has been confronted with a daunting new reality: the rise of malicious generative AI tools like FraudGPT and WormGPT. These rogue creations, lurking in the dark...