Security News

Microsoft has officially begun killing off Cortana as the company moves its focus towards integrating ChatGPT and AI into Windows 11. [...]

ChatGPT was released just nine months ago, and we are still learning how it will affect our daily lives, our careers, and even our systems of self-governance. But when it comes to how AI may...

Cybersecurity risk is distinct from other IT risk in that it has a thinking, adaptive, human opponent. IT generally must deal with first-order chaos and risk, much like hurricanes in meteorology or...

One of these standards is a generative AI content certification known as C2PA. C2PA has been around for two years, but it has gained attention recently as generative AI becomes more common. The C2PA specification is an open-source internet protocol that outlines how to add provenance statements, also known as assertions, to a piece of content.
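To make the provenance idea concrete, here is a minimal sketch in Python of how a manifest-style record could bind assertions about a piece of content to the content's bytes. The field names and structure are simplified assumptions for illustration only; the actual C2PA specification defines its own container format and requires certificate-based signing.

```python
import hashlib
import json

# Illustration of the provenance idea behind C2PA: bind a set of assertions
# (statements about how content was produced) to the content's bytes via a
# cryptographic hash. This is a simplified sketch, not the real C2PA manifest
# format, which uses JUMBF containers and X.509-based signatures.

def build_provenance_manifest(content: bytes, generator: str) -> dict:
    """Attach simple provenance assertions to a piece of content."""
    return {
        "claim_generator": generator,
        "assertions": [
            # An assertion records a fact about the content's origin,
            # e.g. that it was created by a generative AI model.
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {"action": "c2pa.created",
                         "digitalSourceType": "trainedAlgorithmicMedia"}
                    ]
                },
            },
        ],
        # Hard binding: a hash of the exact bytes the claim refers to.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        # In the real specification, the claim is then cryptographically
        # signed so consumers can verify who made the assertions.
    }

if __name__ == "__main__":
    image_bytes = b"...generated image bytes..."
    manifest = build_provenance_manifest(image_bytes, "example-image-generator/1.0")
    print(json.dumps(manifest, indent=2))
```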

If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s...

Recently, Google unveiled the creation of a dedicated AI red team. The AI red team closely tracks both newly published adversarial research and the areas where Google is integrating AI into its products.

AI professionals still face very real challenges in democratizing data across their organizations, let alone AI (and generative AI in particular), according to Dataiku. While the global survey...

In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least one more is under development that is allegedly based on Google's AI experiment, Bard. Both AI-powered bots are the work of the same individual, who appears to be deep in the game of providing chatbots trained specifically for malicious purposes ranging from phishing and social engineering to exploiting vulnerabilities and creating malware.

The Washington Post is reporting on a hack to fool automatic resume sorting programs: putting text in a white font. The idea is that the programs rely primarily on simple pattern matching, and the trick is to copy a list of relevant keywords (or the published job description) into the resume in a white font.
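To illustrate why the trick works, here is a hypothetical sketch of the kind of simple keyword matching the article describes. The keywords, scoring function, and sample resumes are invented for illustration and do not reflect any specific vendor's screening logic; the point is that a plain text extractor discards formatting such as font color, so hidden keywords match just like visible ones.

```python
import re

# Toy example of naive keyword matching over extracted resume text.
# A screener that only sees extracted characters cannot tell white-on-white
# keywords from visible ones.

JOB_KEYWORDS = {"python", "kubernetes", "terraform", "incident response"}

def keyword_score(resume_text: str) -> float:
    """Return the fraction of required keywords found in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in JOB_KEYWORDS if re.search(re.escape(kw), text))
    return hits / len(JOB_KEYWORDS)

# Text extraction from a PDF or .docx typically returns characters only,
# so keywords hidden in white text count the same as visible ones.
visible_resume = "Experienced sysadmin with Python scripting."
padded_resume = visible_resume + " kubernetes terraform incident response"  # hidden 'white' text

print(keyword_score(visible_resume))  # 0.25
print(keyword_score(padded_resume))   # 1.0
```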

With the proliferation of generative AI in the business world today, it's critical that organizations understand where AI applications are drawing their data from and who has access to it. I spoke with Moe Tanabian, chief product officer at industrial software company Cognite and former Microsoft Azure global vice president, about acquiring trustworthy data, AI hallucinations and the future of AI. The following is a transcript of my interview with Tanabian.