Security News

Researchers at the University of Surrey have developed software that can assess how much data an artificial intelligence system has acquired from an organisation's digital database, in response to growing global interest in generative AI systems. The verification software can be used as part of a company's online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data.

Many sectors view AI and machine learning with mixed emotions, but for the cybersecurity industry they present a double-edged sword. On the one hand, AI provides powerful tools for cybersecurity professionals, such as automated security processing and threat detection; on the other, the same capabilities can be turned against defenders by attackers.

Over a thousand people, including professors and AI developers, have co-signed an open letter to all artificial intelligence labs, calling on them to pause the development and training of AI systems more powerful than GPT-4 for at least six months. Signatories from the field of AI development and technology include Elon Musk, co-founder of OpenAI; Yoshua Bengio, a prominent AI professor and founder of Mila; Steve Wozniak, co-founder of Apple; Emad Mostaque, CEO of Stability AI; Stuart Russell, a pioneer in AI research; and Gary Marcus, founder of Geometric Intelligence.

Microsoft has unveiled Security Copilot, an AI-powered analysis tool that aims to simplify, augment and accelerate security operations professionals' work. Security Copilot takes the form of a prompt bar through which security operation center analysts ask questions in natural language and receive practical responses.

Microsoft on Tuesday unveiled Security Copilot in preview, marking its continued push to embed AI-oriented features in an attempt to offer "end-to-end defense at machine speed and scale." Powered by OpenAI's GPT-4 generative AI and Microsoft's own security-specific model, it is billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals, and assess risk exposure.

Despite the opposition of 38 civil society groups, the French National Assembly has approved the use of algorithmic video surveillance during the 2024 Paris Olympics. On Thursday, the French National Assembly adopted Article 7 of the pending bill, which authorizes automated analysis of surveillance video from fixed and drone cameras.

Russian president Vladimir Putin and his Chinese counterpart Xi Jinping have set themselves the goal of dominating the world of information technology. The rest of the world may never recognize that dominance, as appetite to acquire Russian and Chinese technology outside the two nations and their small circle of allies remains limited.

In this Help Net Security video, Liudas Kanapienis, CEO of Ondato, discusses the impact of AI on the future of ID verification and how it is transforming the way identities are verified.

Security professionals are increasingly resorting to unauthorized AI tools, possibly because they are unhappy with the level of automation implemented in their organization's security operations centers, according to a study conducted by Wakefield Research.

Microsoft has announced a new assistant powered by artificial intelligence to help boost productivity across Microsoft 365 apps, currently being tested by select commercial customers. Known as Copilot, the new AI feature helps create and manage documents, presentations, and spreadsheets, as well as triage and reply to emails.