CSE Alumnus Works on Potential Road-Sign Threat to Self-Driving Cars

Sep 5, 2017
CSE alumnus Tadayoshi Kohno is now a professor of computer science and engineering at the University of Washington.

CSE alumnus Tadayoshi Kohno (Ph.D. ’06), now a professor of computer science and engineering at the University of Washington, is one of the authors behind a controversial new paper on “Robust Physical-World Attacks on Machine Learning Models.” 

Photo: CSE alumnus Yoshi Kohno

Kohno – an expert in computer security and privacy – and his co-authors from the University of Michigan, Ann Arbor, Stony Brook University and UC Berkeley noted that “deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations.” In short, they demonstrated how self-driving cars could be tricked into making dangerous mistakes by slight changes to road signs that may not be obvious to a human driver.
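For readers unfamiliar with the term, the short sketch below illustrates what a small-magnitude adversarial perturbation looks like in code. It uses the well-known fast gradient sign method purely as a stand-in; it is not the attack described in the paper, and the model, image and true_label inputs are hypothetical placeholders.

```python
# A minimal sketch (not from the paper) of the general idea behind adversarial
# examples: a small, deliberately chosen perturbation that flips a classifier's
# prediction. Uses the fast gradient sign method (FGSM) as an illustration only.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```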

As first reported by the technology news site Ars Technica, their research found that slightly altering street signs can substantially change how an artificial intelligence neural network interprets them. Made with standard color printing or stickers, the alterations can look like graffiti or ordinary wear-and-tear to the naked eye.

In the most dramatic example reported in the public but not-yet-peer-reviewed paper, Kohno and his colleagues found that a Stop sign altered to look as if it had natural weathering was consistently interpreted by a neural network as a 45 mile-per-hour speed limit sign – a misclassification that occurred 100% of the time. Likewise, a Right Turn sign was misclassified as either a Stop or Added Lane sign in 100% of testing conditions.

Image: Sample experimental images from the camouflage art sticker experiments, taken at a selection of distances and angles. The placement of black and white stickers produced radically different classifications when analyzed by an AI neural network trained to recognize road signs under varying conditions.

To analyze how AI would perceive different changes to signs, the co-authors trained their own neural network using a library of sign images because they found “no publicly available classifier for U.S. road signs.” Kohno and his colleagues also proposed a new attack algorithm, called Robust Physical Perturbations (RP2), designed to generate perturbations that remain effective across images of a sign taken under different physical conditions.
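The paper contains the authors' formal description of RP2; the sketch below is only a rough illustration of the underlying idea of optimizing a masked, sticker-like perturbation so that it fools a classifier across many photos of the same sign taken at different distances, angles and lighting. The model, sign_photos, sticker_mask and target_label names are hypothetical placeholders, not the authors' code.

```python
# A rough sketch (not the authors' implementation) of an RP2-style attack:
# optimize a perturbation, restricted to a sticker-shaped mask, that pushes
# the classifier toward a target class on every captured image of the sign.
import torch
import torch.nn.functional as F

def rp2_style_perturbation(model, sign_photos, sticker_mask, target_label,
                           steps=500, lr=0.1, reg_weight=1e-3):
    """Optimize a masked perturbation over a batch of photos of one sign
    captured under varying physical conditions."""
    delta = torch.zeros_like(sign_photos[0], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    targets = torch.full((len(sign_photos),), target_label, dtype=torch.long)
    for _ in range(steps):
        optimizer.zero_grad()
        # Apply the same masked perturbation to every captured image.
        perturbed = (sign_photos + sticker_mask * delta).clamp(0.0, 1.0)
        # Push predictions toward the attacker's target class while keeping
        # the perturbation small (so it can pass for graffiti or wear).
        loss = F.cross_entropy(model(perturbed), targets) \
               + reg_weight * delta.abs().sum()
        loss.backward()
        optimizer.step()
    return (sticker_mask * delta).detach()
```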

In an FAQ posted online in connection with the new paper, the authors noted that their “algorithm produces perturbations that look like graffiti. As graffiti is commonly seen on road signs, it is unlikely that casual observers would suspect that anything is amiss.” They admit that real self-driving cars may not be vulnerable to this type of attack, but “our work does serve to highlight potential issues that future self-driving car algorithms might have to address.”