Security News

Generative AI: A benefit and a hazard
2023-02-08 05:00

Generative models will be integrated into the software we use every day. Machine learning models will generate more and more of the content we interact with.

#AI
Microsoft launches new AI chat-powered Bing and Edge browser
2023-02-07 21:37

Microsoft announced on Tuesday a new version of its Bing search engine powered by a next-generation OpenAI language model more powerful than ChatGPT and specially trained for web search. "Today, we're launching Bing and Edge powered by AI copilot and chat, to help people get more from search and the web."

Manipulating Weights in Face-Recognition AI Systems
2023-02-03 12:07

Abstract: In this paper we describe how to plant novel types of backdoors in any facial recognition model based on the popular architecture of deep Siamese neural networks, by mathematically changing a small fraction of its weights. These backdoors force the system to err only on specific persons which are preselected by the attacker.

AIs as Computer Hackers
2023-02-02 11:59

Teams of hackers defend their own computers while attacking other teams'. It's a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others'.

OpenAI releases tool to detect AI-written text
2023-01-31 19:57

OpenAI has released an AI text classifier that attempts to detect whether input content was generated using artificial intelligence tools like ChatGPT. "The AI Text Classifier is a fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources, such as ChatGPT," explains a new OpenAI blog post. OpenAI released the tool today after numerous universities and K-12 school districts banned the company's popular ChatGPT AI chatbot due to its ability to complete students' homework, such as writing book reports and essays, and even finishing programming assignments.

#AI
Most consumers would share anonymized personal data to improve AI products
2023-01-25 04:00

The study finds a significant disconnect between data privacy measures by companies and what consumers expect from organizations, especially when it relates to how organizations apply and use artificial intelligence. The survey showed 60 percent of consumers are concerned about how organizations apply and use AI today, and 65 percent already have lost trust in organizations over their AI practices.

AI and Political Lobbying
2023-01-18 12:19

Rather than flooding legislators' inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an AI system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage. This ability to understand and target actors within a network would create a tool for AI hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope.

#AI
VALL-E AI can mimic a person’s voice from a three-second snippet
2023-01-12 08:30

Microsoft researchers are working on a text-to-speech model that can mimic a person's voice - complete with emotion and intonation - after a mere three seconds of training.Some require clean voice data from a recording studio to capture high-quality speech.

#AI
AI-generated phishing emails just got much more convincing
2023-01-11 20:13

GPT-3 language models are being abused to do much more than write college essays, according to WithSecure researchers. Perhaps unsurprisingly, GPT-3 proved to be helpful at crafting a convincing email thread to use in a phishing campaign and social media posts, complete with hashtags, to harass a made-up CEO of a robotics company.

Trojan Puzzle attack trains AI assistants into suggesting malicious code
2023-01-10 20:20

Researchers at the universities of California, Virginia, and Microsoft have devised a new poisoning attack that could trick AI-based coding assistants into suggesting dangerous code. Given the rise of coding assistants like GitHub's Copilot and OpenAI's ChatGPT, finding a covert way to stealthily plant malicious code in the training set of AI models could have widespread consequences, potentially leading to large-scale supply-chain attacks.