Security News

ChatGPT: Productivity tool, great for writing poems, and a security risk? In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it to level up their own game. Finding vulnerabilities: attackers can prompt ChatGPT about potential vulnerabilities in websites, systems, APIs, and other network components.

Cloud computing company VMware rolled out new cloud, AI, edge, and data services at VMware Explore Barcelona 2023 on November 7. "We truly believe private AI will become the default architecture for enabling generative AI in the enterprise," said Chris Wolf, vice president of VMware AI Labs, in a pre-briefing for the media on November 2.

Consumers are concerned about their privacy with AI. Cisco discovered that 60% of consumers had lost trust in organizations due to their AI use. In this Help Net Security video, Robert Waitman, Director of Cisco's Privacy Center of Excellence, discusses consumers' perceptions and behaviors around data privacy.

In this Help Net Security interview, Sarah Pearce, Partner at Hunton Andrews Kurth, offers insights into the evolving landscape of AI legislation and its global impact, noting a global shift towards AI-specific legislation.

Microsoft has made fresh commitments to harden the security of its software and cloud services after a year in which numerous members of the global infosec community criticized the company's tech defenses. The long and short of it is that Microsoft is pushing the big AI button a few more times, more deeply embedding the tech throughout its security operations and products.

Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes. As the threat landscape evolves and generative AI is added to the toolsets available to...

Global leaders from 28 nations have gathered in the U.K. for an influential summit dedicated to AI regulation and safety. Day one of the AI Safety Summit culminated in the signing of the landmark Bletchley Declaration on AI Safety, which commits the 28 participating countries - including the U.K., U.S., and China - to jointly manage and mitigate risks from artificial intelligence while ensuring safe and responsible development and deployment.

The Group of Seven countries have created a voluntary AI code of conduct, released on October 30, regarding the use of advanced artificial intelligence. The code of conduct provides guidelines for AI regulation across G7 countries and includes cybersecurity considerations and international standards.

Google joins OpenAI and Microsoft in rewarding AI bug hunts. Google expanded its Vulnerability Rewards Program to include bugs and vulnerabilities found in generative AI. Specifically, Google is seeking bug hunters for its own generative AI products, such as Google Bard, which is available in many countries, and Google Cloud's Contact Center AI, Agent Assist.

Once they gain access to a healthcare organization's system, cybercriminals can use AI to analyze large datasets, allowing them to gather valuable data, such as patients' personally identifiable information, for identity theft, fraud, or ransomware attacks. AI-powered attacks can exploit vulnerabilities in medical devices, compromise electronic health records, or disrupt critical healthcare services - forcing organizations to quickly revert to paper systems and human intervention for equipment monitoring or record exchanges.