Security News

A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. An A.I. built for public benefit could be tailor-made for those use cases where technology can best help democracy.

There is, of course, also a darker side to generative AI, which researchers have been busily investigating since ChatGPT's public launch last November on the GPT-3.5 large language model. Heinemeyer raises the important issue of measurement: how can we quantify what effect, if any, AI is having on cyberattacks beyond speculation and inference? On this, conventional measurements such as the number of emails created, or their links or attachments, are a blunt tool.

Google's cloud division is following in the footsteps of Microsoft with the launch of Security AI Workbench, which leverages generative AI models to gain better visibility into the threat landscape. As with Microsoft's GPT-4-based Security Copilot, users can "conversationally search, analyze, and investigate security data," with the aim of reducing mean time-to-respond and quickly determining the full scope of events.

The report found that a majority of modern phishing attacks rely on stolen credentials and outlined the growing threat from adversary-in-the-middle (AitM) attacks, increased use of the InterPlanetary File System, and reliance on phishing kits sourced from black markets and AI tools like ChatGPT. "Phishing remains one of the most prevalent threat vectors cybercriminals utilize to breach global organizations. Year-over-year, we continue to see an increase in the number of phishing attacks, which are becoming more sophisticated in nature. Threat actors are leveraging phishing kits and AI tools to launch highly effective email, SMiShing, and Vishing campaigns at scale," said Deepen Desai, Global CISO and Head of Security, Zscaler. "AitM attacks supported by growth in Phishing-as-a-Service have allowed attackers to bypass traditional security models, including multi-factor authentication. To protect their environment, organizations should adopt a zero trust architecture to significantly minimize the attack surface, prevent compromise, and reduce the blast radius in case of a successful attack," added Desai.

The web giant's announcement of the resulting new features - marketed under the Google Cloud Security AI Workbench umbrella brand - is pretty long-winded, so we thought we'd ask its Bard chatbot to summarize it all. Google Cloud Security AI Workbench is a new platform that uses generative AI to help organizations secure their cloud environments.

VirusTotal announced on Monday the launch of a new artificial intelligence-based code analysis feature named Code Insight. The new feature is powered by Google Cloud Security AI Workbench, introduced at the RSA Conference 2023, which uses the Sec-PaLM large language model fine-tuned specifically for security use cases.

Balancing cybersecurity with business priorities: Advice for Boards
In this Help Net Security interview, Alicja Cade, Director, Financial Services, Office of the CISO, Google Cloud, offers insights on how asking the right questions can help improve cyber performance and readiness, advance responsible AI practices, and balance the need for cybersecurity with other business priorities.

5 free online cybersecurity resources for small businesses
This article will explore five free resources that small companies can leverage to improve their cybersecurity posture without breaking the bank.

A growing reliance on AI and ML. Among the key findings in GitLab's report: AI/ML adoption in software development and security workflows continues to accelerate, with 62% of software developers using AI/ML to check code - up from 51% in 2022 - while 53% are using bots in the testing process, compared to 39% last year. In GitLab's 2022 Global DevSecOps Report, 54% of security respondents said they used two to five tools in their workflow, while 35% reported using six to 10; in 2023, these figures were 42% and 43%, respectively.

Sponsored Feature For some time now, alarms about the use of AI by cybercriminals have been sounded in specialist and mainstream media alike - with the contest between AI-armed attackers and AI-protected defenders envisaged in vivid gladiatorial terms. Even as its success rate improves, AI in cybersecurity must shed the outdated perceptions that could prevent it from gaining the mainstream adoption organisations will need to protect themselves once weaponised AI offensives kick off at scale.

Language remains the main attack vector in BEC attacks: across all BEC attacks seen over the past year, 57% relied on language to get them in front of unsuspecting employees, according to Armorblox.