Security News > 2023 > January > Trojan Puzzle attack trains AI assistants into suggesting malicious code
Researchers at the universities of California, Virginia, and Microsoft have devised a new poisoning attack that could trick AI-based coding assistants into suggesting dangerous code.
Given the rise of coding assistants like GitHub's Copilot and OpenAI's ChatGPT, finding a covert way to stealthily plant malicious code in the training set of AI models could have widespread consequences, potentially leading to large-scale supply-chain attacks.
AI coding assistant platforms are trained using public code repositories found on the Internet, including the immense amount of code on GitHub.
Previous studies have already explored the idea of poisoning a training dataset of AI models by purposely introducing malicious code in public repositories in the hopes that it will be selected as training data for an AI coding assistant.
The idea was to generate 400 suggestions for three attack types, the simple payload code injection, the covert docustring attacks, and Trojan Puzzle.
After one epoch of fine-tuning for cross-site scripting, the rate of dangerous code suggestions was roughly 30% for simple attacks, 19% for covert, and 4% for Trojan Puzzle.