Security News

Fraudsters are underestimating the power of AI to detect fake IDs, according to a new report from Ondato. Based on an analysis of millions of ID verifications carried out for its customers in 2022, Ondato found that ID cards were used in 52% of fraudulent verification attempts - far ahead of driving licences and passports.

Concentric AI has launched a new channel partner program aimed at helping partners grow by delivering its AI-powered data risk management solution to improve customers' security posture. With Concentric AI's partner ecosystem in place, end users are better positioned to realize the full value of its Semantic Intelligence AI-powered data risk management platform.

Bots like ChatGPT may not be able to pull off the next big Microsoft server worm or Colonial Pipeline ransomware super-infection, but they may help criminal gangs and nation-state hackers develop some attacks against IT, according to Rob Joyce, director of the NSA's Cybersecurity Directorate. Joyce, speaking at CrowdStrike's Government Summit Tuesday, said he doesn't expect to see - at least not "in the near term" - AI used "for automated attacks that will rip through systems at speeds that are unfathomable today."

Joe Burton, CEO of digital identity authentication company Telesign, spoke with TechRepublic about how the "fuzzy" realm between statistical analysis and artificial intelligence can fuel global, fast and accurate identity management. Burton said the company is looking forward, with big plans to use new technologies and services powered by AI to set itself apart from competitors.

As AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build systems with capabilities at or above the human level is of particular concern.

In response to increasing global interest in generative AI systems, researchers at the University of Surrey have developed software that can assess how much data an artificial intelligence system has acquired from an organisation's digital database. The verification software can be used as part of a company's online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data.

Many sectors view AI and machine learning with mixed emotions, but for the cybersecurity industry, they present a double-edged sword. On the one hand, AI provides powerful tools for cybersecurity professionals, such as automated security processing and threat detection.

Over a thousand people, including professors and AI developers, have co-signed an open letter to all artificial intelligence labs, calling on them to pause the development and training of AI systems more powerful than GPT-4 for at least six months. The letter is signed by figures in AI development and technology, including Elon Musk, co-founder of OpenAI; Yoshua Bengio, a prominent AI professor and founder of Mila; Steve Wozniak, co-founder of Apple; Emad Mostaque, CEO of Stability AI; Stuart Russell, a pioneer in AI research; and Gary Marcus, founder of Geometric Intelligence.

Microsoft has unveiled Security Copilot, an AI-powered analysis tool that aims to simplify, augment and accelerate the work of security operations professionals. Security Copilot takes the form of a prompt bar through which security operations center analysts ask questions in natural language and receive practical responses.