Security News
The success of ChatGPT, a text-generation chatbot, has sparked widespread interest in generative AI among millions of people worldwide. According to Jumio's research, 67% of consumers globally are aware of generative AI technologies, and in certain markets, such as Singapore, 45% have utilized an application that employs such technologies.
Meta says it has shut down over 1,000 links related to ChatGPT that led its users to malware, as criminals seek to profit from the current craze for generative AI. ChatGPT has quickly amassed more than 100 million users, encouraging many organizations to explore how generative AI might help them increase productivity and profit. Scammers are thinking along the same lines, using chatbot-themed links and lures to draw people to malicious websites that steal their information or offer downloads laced with malware.
We will discuss how organizations can proactively improve their security posture by embracing technology and implementing best practices to defend against these advanced threats. One of the primary ways web applications are targeted is the exploitation of known vulnerabilities in web servers, databases, content management systems, and third-party libraries.
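One proactive practice implied here is keeping an inventory of third-party components and comparing it against vulnerability advisories before attackers do. The following is a minimal sketch of that idea; the package names, version ranges, and advisory IDs are entirely illustrative, not a real vulnerability feed.

```python
# Illustrative sketch: flag installed third-party packages whose versions
# fall at or below a known-affected release. All data here is made up.

KNOWN_VULNERABLE = {
    # package name -> list of (highest affected version, advisory id)
    "examplelib": [((1, 4, 2), "ADV-0001")],
    "widgetcms": [((2, 0, 0), "ADV-0002")],
}

def parse_version(text):
    """Turn a dotted version string like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in text.split("."))

def find_vulnerable(installed):
    """Return (package, advisory) pairs for versions within an affected range."""
    findings = []
    for name, version in installed.items():
        for max_affected, advisory in KNOWN_VULNERABLE.get(name, []):
            if parse_version(version) <= max_affected:
                findings.append((name, advisory))
    return findings

if __name__ == "__main__":
    inventory = {"examplelib": "1.3.0", "widgetcms": "2.1.0"}
    # examplelib 1.3.0 is within the affected range; widgetcms 2.1.0 is not
    print(find_vulnerable(inventory))
```

In practice this comparison is done by software-composition-analysis tooling against a live advisory database; the sketch only shows the core version-range check.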
As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. Many AI products are deployed without institutions fully understanding the security risks they pose.
Are we moving too fast with AI? This is a central question both inside and outside the tech industry, given the recent tsunami of attention paid to ChatGPT and other generative AI tools. Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
AI could advance the public good, not private profit, and bolster democracy instead of undermining it. An AI built for public benefit could be tailor-made for those use cases where technology can best help democracy.
There is, of course, also a darker side to generative AI, which researchers have been busily investigating since ChatGPT's public launch on the GPT-3 large language model last November. Heinemeyer raises the important issue of measurement: how can we quantify what effect, if any, AI is having on cyberattacks beyond speculation and inference? On this, standard metrics such as the number of emails created, or of their links and attachments, are a blunt tool.
Google's cloud division is following in the footsteps of Microsoft with the launch of Security AI Workbench, which leverages generative AI models to gain better visibility into the threat landscape. As with Microsoft's GPT-4-based Security Copilot, users can "conversationally search, analyze, and investigate security data" with the aim of reducing mean time-to-respond and quickly determining the full scope of events.
The report found that a majority of modern phishing attacks rely on stolen credentials and outlined the growing threat from Adversary-in-the-Middle (AitM) attacks, increased use of the InterPlanetary File System, and reliance on phishing kits sourced from black markets and AI tools like ChatGPT. "Phishing remains one of the most prevalent threat vectors cybercriminals utilize to breach global organizations. Year-over-year, we continue to see an increase in the number of phishing attacks which are becoming more sophisticated in nature. Threat actors are leveraging phishing kits & AI tools to launch highly effective e-mail, SMiShing, and Vishing campaigns at scale," said Deepen Desai, Global CISO and Head of Security, Zscaler. "AitM attacks supported by growth in Phishing-as-a-Service have allowed attackers to bypass traditional security models, including multi-factor authentication. To protect their environment, organizations should adopt a zero trust architecture to significantly minimize the attack surface, prevent compromise, and reduce the blast radius in case of a successful attack," added Desai.
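AitM phishing typically lures victims to a proxy site on a lookalike domain that relays the real login flow, MFA included. A coarse but illustrative defensive check is to accept only exact, pre-approved login hostnames over HTTPS; the trusted domain below is a made-up example, and real deployments rely on browser-enforced origin binding (e.g. FIDO2) rather than string checks.

```python
from urllib.parse import urlparse

# Illustrative sketch: reject lookalike login URLs of the kind used in
# AitM phishing. The trusted hostname is hypothetical.
TRUSTED_LOGIN_DOMAINS = {"login.example.com"}

def is_trusted_login_url(url):
    """Accept only HTTPS URLs whose hostname exactly matches an allowlisted domain."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in TRUSTED_LOGIN_DOMAINS

print(is_trusted_login_url("https://login.example.com/auth"))       # True
print(is_trusted_login_url("https://login.example.com.evil.io/"))   # False: lookalike suffix
print(is_trusted_login_url("http://login.example.com/"))            # False: not HTTPS
```

Note the second case: the attacker's hostname merely *starts with* the trusted name, which is why the check demands an exact hostname match rather than a substring test.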