New Attack Against Self-Driving Car AI (Security News, May 2024)

The shade of red on a stop sign could look different on each scanline, depending on the time between the diode flash and that line's capture.
The result is the camera capturing a frame full of lines that don't quite match each other.
Because the lines don't match, the classifier no longer recognizes the image as a traffic sign.
Crucially, the unrecognizable image isn't a single anomaly among many accurate frames: the attack keeps the image consistently unrecognizable, leaving the classifier nothing it can assess, which makes it a serious security concern.
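The interaction described above can be sketched in a toy simulation: a rolling-shutter camera reads out one scanline at a time, so an LED flickering faster than the frame readout paints alternating bright and dim bands across the frame. All timing values and the 50% duty cycle below are illustrative assumptions for the sketch, not parameters from the researchers' attack.

```python
# Toy model of a rolling-shutter camera viewing a flickering LED.
# Assumed, illustrative numbers -- not from the actual GhostStripe paper.
LINES = 480               # scanlines in one frame
LINE_TIME_US = 30         # time to read out one scanline (microseconds)
FLICKER_PERIOD_US = 100   # hypothetical LED on/off period

def captured_line_brightness(flicker_period_us, line_time_us, lines):
    """Return per-scanline brightness: each line samples the LED at a
    different instant, so fast flicker becomes stripes across the frame."""
    brightness = []
    for line in range(lines):
        t = line * line_time_us  # moment this scanline is read out
        led_on = (t % flicker_period_us) < (flicker_period_us / 2)  # 50% duty
        brightness.append(255 if led_on else 80)  # bright vs. dim red
    return brightness

frame = captured_line_brightness(FLICKER_PERIOD_US, LINE_TIME_US, LINES)
# Adjacent scanlines disagree: the "striped" sign a classifier rejects.
bands = sum(1 for a, b in zip(frame, frame[1:]) if a != b)
print(f"{bands} brightness transitions across {LINES} scanlines")
```

Because the flicker period is not a multiple of the line readout time, the on/off boundary lands on a different scanline each cycle, producing the mismatched bands the classifier cannot reconcile.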
The researchers developed two versions of a stable attack.
GhostStripe2 is targeted and does require access to the vehicle, access a hacker could perhaps gain covertly while the vehicle is undergoing maintenance.
News URL
https://www.schneier.com/blog/archives/2024/05/new-attack-against-self-driving-car-ai.html
Related news
- CrowdStrike Security Report: Generative AI Powers Social Engineering Attacks
- How New AI Agents Will Transform Credential Stuffing Attacks
- YouTube warns of AI-generated video of its CEO used in phishing attacks
- MINJA sneak attack poisons AI models for other chatbot users
- New 'Rules File Backdoor' Attack Lets Hackers Inject Malicious Code via AI Code Editors
- THN Weekly Recap: GitHub Supply Chain Attack, AI Malware, BYOVD Tactics, and More
- AI-Powered SaaS Security: Keeping Pace with an Expanding Attack Surface