
Poltergeist attack could leave autonomous vehicles blind to obstacles – or haunt them with new ones
2021-06-18 10:01

Researchers at the Ubiquitous System Security Lab of Zhejiang University and the University of Michigan's Security and Privacy Research Group say they've found a way to blind autonomous vehicles to obstacles using simple audio signals.

To prove the point, the team developed Poltergeist: an attack against camera-based computer-vision systems, such as those found in autonomous vehicles, which uses audio to trigger the camera sensor's image stabilisation and blur the image, tricking the machine-learning system into ignoring obstacles in its path.

"The blur caused by unnecessary motion compensation can change the outline, the size, and even the colour of an existing object or an image region without any objects," the team found, "which may lead to hiding, altering an existing object, or creating a non-existing object." The team categorised these in turn as Hiding Attacks, Creating Attacks, and Altering Attacks.
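The mechanism can be illustrated with a minimal simulation. This is not the researchers' code; it simply assumes that injected acoustic resonance adds a spurious sine wave to the gyroscope reading, and that the stabiliser then "compensates" for motion that never happened, smearing the exposure:

```python
import numpy as np

def stabilised_exposure(image, gyro_samples, shift_per_unit=1.0):
    """Average the frame over the stabiliser's compensation shifts.

    Spurious (non-zero) shifts on a stationary camera produce motion blur.
    """
    acc = np.zeros_like(image, dtype=float)
    for g in gyro_samples:
        shift = int(round(g * shift_per_unit))  # pixels of false compensation
        acc += np.roll(image, shift, axis=1)    # horizontal shift
    return acc / len(gyro_samples)

# A sharp 1-pixel-wide "obstacle" edge in an otherwise dark frame.
frame = np.zeros((8, 32))
frame[:, 16] = 1.0

# Stationary camera: the gyro should read ~0. Acoustic injection is modelled
# here as a resonant sine wave riding on the sensor output (values assumed).
t = np.linspace(0, 1, 50)
injected = 4.0 * np.sin(2 * np.pi * 5 * t)

clean = stabilised_exposure(frame, np.zeros_like(t))
blurred = stabilised_exposure(frame, injected)

# Without injection the edge survives intact; with injection its energy is
# smeared across neighbouring columns, changing outline and intensity.
print(clean[0].max(), blurred[0].max())
```

The smeared edge is exactly the kind of outline and intensity change the quote above describes: an object detector trained on sharp edges can miss the blurred obstacle (a Hiding Attack), while blur landing on empty road can register as a phantom object (a Creating Attack).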

To prove the concept outside the lab, the team attached a Samsung S20 smartphone to a moving vehicle and carried out an actual attack.

The team stopped short of actively attacking a real-world autonomous vehicle.

"While it's clear that there exist pathways to cause computer vision systems to fail with acoustic injection," the researchers concluded, "it's not clear what products today are at risk. Rather than focus on today's nascent autonomous vehicle technology, we model the limits in simulation to understand how to better prevent future yet unimagined autonomous vehicles from being susceptible to acoustic attacks on image stabilisation systems."

News URL

https://go.theregister.com/feed/www.theregister.com/2021/06/18/poltergeist_autonomous_vehicles/