Security News

Humans are still better than AI at crafting phishing emails, but for how long?
2023-10-26 12:14

Humans are still better at crafting phishing emails than AI, but not by much and likely not for long, according to research conducted by IBM X-Force Red. The researchers wanted to see whether ChatGPT is as capable of writing a "good" phishing email as human attackers are.

Security leaders have good reasons to fear AI-generated attacks
2023-10-25 03:30

Generative AI is likely behind the increase in both the volume and sophistication of email attacks that organizations have experienced in the past few months, and it's still early days, according to Abnormal Security. Security leaders' top worry is the increased sophistication of email attacks that generative AI will make possible, particularly the fact that generative AI will help attackers craft highly specific and personalized email attacks based on publicly available information.

Generative AI Can Write Phishing Emails, But Humans Are Better At It, IBM X-Force Finds
2023-10-24 11:00

An IBM X-Force research project led by Chief People Hacker Stephanie "Snow" Carruthers found that phishing emails written by security researchers saw a 3% better click rate than phishing emails written by ChatGPT. The research project was performed at a global healthcare company based in Canada. To get ChatGPT to write an email that would lure someone into clicking a malicious link, the IBM researchers prompted it to draft a persuasive email taking into account the top areas of concern for employees in the target industry, which in this case was healthcare.

Bracing for AI-enabled ransomware and cyber extortion attacks
2023-10-24 04:30

As businesses scramble to take the lead in operationalizing AI-enabled interfaces, ransomware actors will use AI to scale their operations, widen their profit margins, and increase their likelihood of pulling off successful attacks. Researchers have charted a 37% rise in ransomware incidents in the Zscaler cloud in 2023, a triple-digit increase in double-extortion tactics across numerous industries, and an overall surge in sector-specific attacks targeting industries like manufacturing.

Microsoft opens early access to AI assistant for infosec, Security Copilot
2023-10-23 13:00

Copilotization of all things continues, as the helper offers incident reports to share with the boss and more. Microsoft is opening up the early access program for its flagship cybersecurity AI...

Microsoft announces wider availability of AI-powered Security Copilot
2023-10-23 11:53

"Security Copilot is an AI assistant for security teams that builds on the latest in large language models and harnesses Microsoft's security expertise and global threat intelligence to help security teams outpace their adversaries," said Vasu Jakkal, corporate vice president, security, compliance, identity, and management at Microsoft. Available in private preview since March 2023, Security Copilot allows security analysts to submit prompts in natural language, much like ChatGPT, to get actionable responses and simplify threat hunting.

Who's Experimenting with AI Tools in Your Organization?
2023-10-23 11:34

With the record-setting growth of consumer-focused AI productivity tools like ChatGPT, artificial intelligence—formerly the realm of data science and engineering teams—has become a resource...

AI and US Election Rules
2023-10-20 11:10

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns off the rails. Future uses of AI by campaigns go far beyond deepfaked images.

Generative AI merges with intelligent malware, threat level rises
2023-10-18 03:00

South American organizations experience attack rates of merely 2%, largely due to their practice of verifying IDs against government databases, which creates a more formidable barrier against fraud. The data analysis underscores a significant discrepancy in attack rates between document-based verification and selfie-based verification.

Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge
2023-10-17 10:17

Recently, the cybersecurity landscape has been confronted with a daunting new reality – the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark...