Security News
As AI continues to advance, it is important to understand how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build systems with capabilities at or above the human level is of particular concern.
In response to the increasing global interest in generative AI systems, researchers at the University of Surrey have developed software that can assess how much data an artificial intelligence system has acquired from an organisation's digital database. This verification software can be used as part of a company's online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data.
Many sectors view AI and machine learning with mixed emotions, but for the cybersecurity industry they present a double-edged sword. On the one hand, AI provides powerful tools for cybersecurity professionals, such as automated security processing and threat detection; on the other, those same capabilities are available to attackers.
Over a thousand people, including professors and AI developers, have co-signed an open letter to all artificial intelligence labs, calling on them to pause the development and training of AI systems more powerful than GPT-4 for at least six months. Signatories from the field of AI development and technology include Elon Musk, co-founder of OpenAI; Yoshua Bengio, a prominent AI professor and founder of Mila; Steve Wozniak, co-founder of Apple; Emad Mostaque, CEO of Stability AI; Stuart Russell, a pioneer in AI research; and Gary Marcus, founder of Geometric Intelligence.
Microsoft has unveiled Security Copilot, an AI-powered analysis tool that aims to simplify, augment, and accelerate the work of security operations professionals. Security Copilot takes the form of a prompt bar through which security operations center analysts ask questions in natural language and receive practical responses.
Microsoft on Tuesday unveiled Security Copilot in preview, marking its continued push to embed AI-oriented features in an attempt to offer "end-to-end defense at machine speed and scale." Powered by OpenAI's GPT-4 generative AI and its own security-specific model, it's billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals, and assess risk exposure.
Despite the opposition of 38 civil society groups, the French National Assembly has approved the use of algorithmic video surveillance during the 2024 Paris Olympics. On Thursday, the French National Assembly adopted Article 7 of the pending bill, which authorizes automated analysis of surveillance video from fixed and drone cameras.
Russian President Vladimir Putin and his Chinese counterpart Xi Jinping have set themselves the goal of dominating the world of information technology. The rest of the world may never acknowledge that dominance, as appetite for Russian and Chinese technology outside the two nations and their small circle of allies is limited.
In this Help Net Security video, Liudas Kanapienis, CEO of Ondato, discusses the impact of AI on the future of ID verification and how it is transforming the way identities are verified.