Security News

In this Help Net Security interview, Sarah Pearce, Partner at Hunton Andrews Kurth, offers insights into the evolving landscape of AI legislation and its global impact. As Pearce notes, "We're observing a global shift towards AI-specific legislation."

Microsoft has made fresh commitments to harden the security of its software and cloud services after a year in which numerous members of the global infosec community criticized the company's tech defenses. The long and short of it is that Microsoft is pushing the big AI button a few more times, more deeply embedding the tech throughout its security operations and products.

Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes. As the threat landscape evolves and generative AI is added to the toolsets available to...

Global leaders from 28 nations have gathered in the U.K. for an influential summit dedicated to AI regulation and safety. Day one of the AI Safety Summit culminated in the signing of the landmark Bletchley Declaration on AI safety, which commits the 28 participating countries - including the U.K., U.S. and China - to jointly manage and mitigate risks from artificial intelligence while ensuring safe and responsible development and deployment.

The Group of Seven (G7) countries have created a voluntary AI code of conduct, released on October 30, governing the use of advanced artificial intelligence. The code provides guidelines for AI regulation across G7 countries and includes cybersecurity considerations and international standards.

Google joins OpenAI and Microsoft in rewarding AI bug hunts. Google expanded its Vulnerability Rewards Program to include bugs and vulnerabilities found in generative AI. Specifically, Google is inviting bug hunters to probe its own generative AI products, such as Google Bard, which is available in many countries, and Google Cloud's Contact Center AI, Agent Assist.

Once they gain access to a healthcare organization's system, cybercriminals can use AI to analyze large datasets, allowing them to harvest valuable data, such as patients' personally identifiable information, for identity theft, fraud, or ransomware attacks. AI-powered attacks can exploit vulnerabilities in medical devices, compromise electronic health records, or disrupt critical healthcare services - forcing organizations to quickly revert to paper systems and human intervention for equipment monitoring or record exchanges.

The executive order features wide-ranging guidance on maintaining safety, civil rights and privacy within government agencies while promoting AI innovation and competition throughout the U.S. Although the executive order doesn't specifically address generative artificial intelligence, it was likely issued in reaction to the proliferation of generative AI, which has become a hot topic since the public release of OpenAI's ChatGPT in November 2022. Any company developing "any foundation model that poses a serious risk to national security, national economic security, or national public health and safety" must keep the U.S. government informed of its training and red team safety tests, the executive order states.

The order also directs the development of a National Security Memorandum on further AI and security actions, to be drafted by the National Security Council and White House Chief of Staff, and seeks to protect Americans' privacy by prioritizing federal support for privacy-preserving techniques - including ones that use cutting-edge AI and that let AI systems be trained while preserving the privacy of the training data.

Google has expanded its bug bounty program, aka the Vulnerability Rewards Program, to cover threats that could arise from Google's generative AI systems. Following its voluntary commitment to the Biden-Harris Administration to develop AI responsibly and manage its risks, Google has added AI-related risks to the program, which gives recognition and compensation to ethical hackers who find and disclose vulnerabilities in Google's systems.