Security News

We will discuss how organizations can proactively improve their security posture by embracing technology and implementing best practices to defend against advanced threats. One of the primary ways web applications are targeted is vulnerability exploitation, in which attackers scan for known flaws in web servers, databases, content management systems, and third-party libraries.
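As a defensive counterpart to that kind of scanning, teams can audit their own dependencies against public advisory databases. Below is a minimal sketch using the OSV.dev query API; the package name and version are illustrative examples, not drawn from any report mentioned here.

```python
# Sketch: query the OSV.dev advisory database for known vulnerabilities
# affecting one pinned third-party dependency. The package and version
# below are illustrative examples only.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(package: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return published advisories for a single package version."""
    payload = json.dumps({
        "package": {"name": package, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # An old Django release with published advisories.
    for vuln in known_vulns("django", "3.2"):
        print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```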

Generative AI has captured the imagination of millions worldwide, largely driven by the recent success of ChatGPT, the text-generation chatbot. Our new research showed that globally, 67% of consumers have heard of generative AI technologies, and in some markets, like Singapore, almost half have used an application built on them.

As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. Many AI products are deployed without institutions fully understanding the security risks they pose.

Are we moving too fast with AI? This is a central question both inside and outside the tech industry, given the recent tsunami of attention paid to ChatGPT and other generative AI tools. Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

A.I. could advance the public good, not private profit, and bolster democracy instead of undermining it. An A.I. built for public benefit could be tailor-made for those use cases where technology can best help democracy.

There is, of course, also a darker side to generative AI, which researchers have been busily investigating since ChatGPT's public launch on the GPT-3 large language model last November. Heinemeyer raises the important issue of measurement: how can we quantify what effect, if any, AI is having on cyberattacks beyond speculation and inference? Here, conventional measurements such as the number of emails created, or of their links and attachments, are a blunt tool.

Google's cloud division is following in the footsteps of Microsoft with the launch of Security AI Workbench, which leverages generative AI models to gain better visibility into the threat landscape. As with Microsoft's GPT-4-based Security Copilot, users can "conversationally search, analyze, and investigate security data" with the aim of reducing mean time-to-respond and quickly determining the full scope of events.
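Neither vendor's full API is detailed here, so as a purely illustrative pattern, this is roughly how "conversational" security search tools are structured: raw telemetry plus an analyst's question are packed into a prompt for a security-tuned model. The call_llm stub below stands in for whatever SDK the platform actually exposes; the log lines and question are made up.

```python
# Illustrative pattern only: how "conversational" security search tools
# are typically structured. call_llm is a stub, not the real Workbench
# or Security Copilot API; log lines and the question are invented.
LOGS = [
    "2023-04-24T10:02:11Z deny tcp 203.0.113.7 -> 10.0.0.5:445",
    "2023-04-24T10:02:15Z allow tcp 198.51.100.2 -> 10.0.0.8:443",
    "2023-04-24T10:02:19Z deny tcp 203.0.113.7 -> 10.0.0.5:3389",
]

def build_prompt(question: str, logs: list[str]) -> str:
    """Pack raw telemetry plus the analyst's question into one prompt."""
    return (
        "You are a security analyst assistant.\n"
        "Firewall logs:\n" + "\n".join(logs)
        + f"\n\nQuestion: {question}\nAnswer concisely."
    )

def call_llm(prompt: str) -> str:
    # Stand-in for a security-tuned model such as Sec-PaLM or GPT-4;
    # swap in the vendor SDK call in a real deployment.
    return "(model response would appear here)"

print(call_llm(build_prompt("Which source IP appears to be port-scanning us?", LOGS)))
```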

The report found that a majority of modern phishing attacks rely on stolen credentials, and outlined the growing threat from Adversary-in-the-Middle (AitM) attacks, increased use of the InterPlanetary File System, and a reliance on phishing kits sourced from black markets and AI tools like ChatGPT. "Phishing remains one of the most prevalent threat vectors cybercriminals utilize to breach global organizations. Year-over-year, we continue to see an increase in the number of phishing attacks which are becoming more sophisticated in nature. Threat actors are leveraging phishing kits & AI tools to launch highly effective e-mail, SMiShing, and Vishing campaigns at scale," said Deepen Desai, Global CISO and Head of Security, Zscaler. "AitM attacks supported by growth in Phishing-as-a-Service have allowed attackers to bypass traditional security models, including multi-factor authentication. To protect their environment, organizations should adopt a zero trust architecture to significantly minimize the attack surface, prevent compromise, and reduce the blast radius in case of a successful attack," added Desai.
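Desai's point about AitM defeating traditional multi-factor authentication is worth unpacking. Phishing-resistant schemes such as WebAuthn resist AitM because the browser embeds the page's origin in the signed clientDataJSON, so a login relayed through a proxy domain fails the relying party's origin check. Here is a minimal sketch of that check; the field names follow the WebAuthn spec, while the origin value and surrounding server code are assumptions for illustration.

```python
# Sketch of the origin check that makes WebAuthn phishing-resistant.
# In an AitM attack the victim authenticates through a proxy domain,
# so the origin the browser bakes into the signed clientDataJSON will
# not match the relying party's real origin, and the login fails.
import base64
import json

EXPECTED_ORIGIN = "https://login.example.com"  # illustrative relying party

def _b64url_decode(s: str) -> bytes:
    # WebAuthn uses unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_client_data(client_data_b64: str, expected_challenge: str) -> bool:
    """Check the clientDataJSON half of a WebAuthn assertion."""
    data = json.loads(_b64url_decode(client_data_b64))
    return (
        data.get("type") == "webauthn.get"
        and data.get("challenge") == expected_challenge
        and data.get("origin") == EXPECTED_ORIGIN  # an AitM proxy fails here
    )

if __name__ == "__main__":
    # Simulate what a victim behind an AitM proxy would submit.
    forged = base64.urlsafe_b64encode(json.dumps({
        "type": "webauthn.get",
        "challenge": "abc123",
        "origin": "https://evil-proxy.example",
    }).encode()).decode()
    print(verify_client_data(forged, "abc123"))  # False: origin mismatch
```

A real relying party would additionally verify the authenticator's signature over the assertion; the origin comparison shown here is the piece that breaks AitM phishing specifically.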

The web giant's announcement of the resulting new features - marketed under the Google Cloud Security AI Workbench umbrella brand - is pretty long-winded, so we thought we'd ask its Bard chatbot to summarize it all: "Google Cloud Security AI Workbench is a new platform that uses generative AI to help organizations secure their cloud environments."

VirusTotal announced on Monday the launch of a new artificial intelligence-based code analysis feature named Code Insight. The new feature is powered by the Google Cloud Security AI Workbench introduced at the RSA Conference 2023, which uses the Sec-PaLM large language model specifically fine-tuned for security use cases.
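Code Insight's summaries surface in the VirusTotal interface; for context, here is a minimal sketch of pulling a standard file report through VirusTotal's public v3 REST API, on top of which such features sit. The hash placeholder and environment variable are assumptions for illustration.

```python
# Sketch: fetch a file report from the VirusTotal v3 REST API.
# Code Insight verdicts surface in the VirusTotal UI; this only pulls
# the standard last-analysis stats for a known file hash.
import json
import os
import urllib.request

API_KEY = os.environ["VT_API_KEY"]      # your VirusTotal API key
FILE_SHA256 = "<sha256-of-sample>"      # placeholder: hash to look up

req = urllib.request.Request(
    f"https://www.virustotal.com/api/v3/files/{FILE_SHA256}",
    headers={"x-apikey": API_KEY},
)
with urllib.request.urlopen(req) as resp:
    report = json.load(resp)

stats = report["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, undetected: {stats['undetected']}")
```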