Researchers trick autonomous car autopilot with phantom images (Security News, February 2020)
Researchers from Ben-Gurion University of the Negev's Cyber Security Research Center have found that they can trick the autopilot of an autonomous car into erroneously applying its brakes in response to "phantom" images projected on a road or billboard.
In a research paper, they demonstrated that autopilots and advanced driver-assistance systems (ADAS) in semi-autonomous or fully autonomous cars register depthless projections of objects as real objects.
In addition to causing the autopilot to apply the brakes, the researchers demonstrated that they could fool an ADAS into registering phantom traffic signs as real when the signs were projected for just 125 milliseconds inside advertisements on digital billboards.
Depthless objects projected on a road are treated as real even though the car's depth sensors can differentiate between 2D and 3D objects. The BGU researchers believe this is the result of a "better safe than sorry" policy that causes the car to consider a visible 2D object real.
As a countermeasure, the researchers are developing a neural network model that analyzes a detected object's context, surface, and reflected light, and can detect phantoms with high accuracy.
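The article does not describe the model's architecture, but the core idea of combining several visual cues into one phantom-vs-real decision can be sketched as follows. This is an illustrative toy only: the feature channels, dimensions, weights, and function names are all hypothetical, not the BGU researchers' actual model.

```python
import numpy as np

# Toy sketch (hypothetical, not the BGU model): score an object as
# phantom vs. real by combining three feature channels mentioned in
# the article -- context, surface, and reflected light.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_score(features, weights, bias):
    """Score one feature channel (e.g. surface texture) into (0, 1)."""
    return sigmoid(features @ weights + bias)

def phantom_probability(context, surface, light, params):
    """Combine the three per-channel scores into one probability."""
    scores = np.array([
        channel_score(context, *params["context"]),
        channel_score(surface, *params["surface"]),
        channel_score(light,   *params["light"]),
    ])
    # Weighted vote across channels, like a small committee of experts.
    return float(sigmoid(params["combine"] @ scores))

# Hypothetical 4-dimensional feature vectors and random parameters,
# standing in for learned weights.
params = {
    "context": (rng.normal(size=4), 0.0),
    "surface": (rng.normal(size=4), 0.0),
    "light":   (rng.normal(size=4), 0.0),
    "combine": rng.normal(size=3),
}
p = phantom_probability(rng.normal(size=4), rng.normal(size=4),
                        rng.normal(size=4), params)
print(0.0 < p < 1.0)  # prints True: output is a valid probability
```

In a real system the per-channel scores would come from trained sub-networks rather than fixed random weights; the point here is only the structure: independent cues fused into a single phantom probability.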
News URL: http://feedproxy.google.com/~r/HelpNetSecurity/~3/_yyd4QniJys/