Security News

Abstract: In this paper we describe how to plant novel types of backdoors in any facial recognition model based on the popular architecture of deep Siamese neural networks, by mathematically changing a small fraction of its weights. These backdoors force the system to err only on specific persons which are preselected by the attacker.

Teams of hackers defend their own computers while attacking other teams'. It's a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others'.

OpenAI has released an AI text classifier that attempts to detect whether input content was generated using artificial intelligence tools like ChatGPT. "The AI Text Classifier is a fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources, such as ChatGPT," explains a new OpenAI blog post. OpenAI released the tool today after numerous universities and K-12 school districts banned the company's popular ChatGPT AI chatbot due to its ability to complete students' homework, such as writing book reports and essays, and even finishing programming assignments.

The study finds a significant disconnect between data privacy measures by companies and what consumers expect from organizations, especially when it relates to how organizations apply and use artificial intelligence. The survey showed 60 percent of consumers are concerned about how organizations apply and use AI today, and 65 percent already have lost trust in organizations over their AI practices.

Rather than flooding legislators' inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an AI system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage. This ability to understand and target actors within a network would create a tool for AI hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope.

Microsoft researchers are working on a text-to-speech model that can mimic a person's voice - complete with emotion and intonation - after a mere three seconds of training.Some require clean voice data from a recording studio to capture high-quality speech.

GPT-3 language models are being abused to do much more than write college essays, according to WithSecure researchers. Perhaps unsurprisingly, GPT-3 proved to be helpful at crafting a convincing email thread to use in a phishing campaign and social media posts, complete with hashtags, to harass a made-up CEO of a robotics company.

Researchers at the universities of California, Virginia, and Microsoft have devised a new poisoning attack that could trick AI-based coding assistants into suggesting dangerous code. Given the rise of coding assistants like GitHub's Copilot and OpenAI's ChatGPT, finding a covert way to stealthily plant malicious code in the training set of AI models could have widespread consequences, potentially leading to large-scale supply-chain attacks.

Two of the US government's leading security agencies are building a machine learning-based analytics environment to defend against rapidly evolving threats and create more resilient infrastructures for both government entities and private organizations. The Department of Homeland Security - in particular its Science and Technology Directorate research arm - and Cybersecurity and Infrastructure Security Agency picture a multicloud collaborative sandbox that will become a training ground for government boffins to test analytic methods and technologies that rely heavily on artificial intelligence and machine learning techniques.

For even the most skilled hackers, it can take at least an hour to write a script to exploit a software vulnerability and infiltrate their target. Soon, a machine may be able to do it in mere seconds.When OpenAI last week released its ChatGPT tool, allowing users to interact with an artificial intelligence chatbot, computer security researcher Brendan Dolan-Gavitt wondered whether he could instruct it to write malicious code. So, he asked the model to solve a simple capture-the-flag challenge.