
December 11, 2017

Robust Physical-World Attacks on Machine Learning Models

Could graffiti convey a hidden message to your car? Or cause a robot to do something unexpected? Cars, robots, and other devices increasingly rely on images of their surroundings to make decisions. New research explores the possibility that malicious alterations to real-world objects, like the road sign above, could cause these devices to “misread” the image and take an adverse action. The paper Robust Physical-World Attacks on Deep Learning Models is by a research team spanning the University of Washington, including Ph.D. student Ivan Evtimov, Tech Policy Lab postdoc Earlence Fernandes, and Co-Director Yoshi Kohno, along with Kevin Eykholt and Atul Prakash from the University of Michigan, Amir Rahmati from Stony Brook University, and Bo Li and Dawn Song from the University of California, Berkeley.

To address this question, the researchers created an algorithm that could generate these alterations, a methodology to evaluate their effectiveness in fooling machine learning, and then applied both to the real-world example of autonomous vehicles. They experimented to see whether physically altering an object, in this case a road sign, could cause an autonomous vehicle’s computer to classify it incorrectly. Autonomous vehicles learn to classify objects using machine learning, where the car’s computer “learns” what objects such as road signs, pedestrians, and other cars look like by being shown thousands of photos of each object. If you’re not familiar with machine learning, check out the Lab’s fun primer video “What is Machine Learning?” here. Current self-driving car systems can include this type of camera sensor, as well as a variety of others such as lidar, radar, and GPS.
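For readers who want a concrete picture of what “learning from thousands of photos” looks like in code, here is a minimal, hypothetical sketch of training an image classifier in PyTorch. The tiny network, the number of sign classes, and the random stand-in data are all assumptions for illustration; this is not the researchers’ model or dataset.

```python
# Minimal sketch of how an image classifier "learns" road signs from labeled
# photos. Illustrative only -- not the researchers' model or data.
import torch
import torch.nn as nn

NUM_CLASSES = 17   # e.g. stop, speed limit 45, yield, ... (assumed count)
IMAGE_SIZE = 32    # small input resolution, common in sign datasets

# A tiny convolutional network -- far simpler than a production classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, NUM_CLASSES),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for thousands of labeled photos: random images and labels here.
images = torch.rand(64, 3, IMAGE_SIZE, IMAGE_SIZE)
labels = torch.randint(0, NUM_CLASSES, (64,))

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(images)          # the classifier's scores for each sign type
    loss = loss_fn(logits, labels)  # how wrong it is on the labeled examples
    loss.backward()                 # compute gradients
    optimizer.step()                # nudge the weights toward better answers
```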

The researchers wanted to explore whether it’s possible to fool these machine learning “brains” by slightly altering images shown to the classifier, which, in the case of autonomous vehicles, identifies the road signs seen by the car’s camera sensors. While previous research has focused on altering an image digitally and then feeding that digital image into a classifier, the research team wanted to see if it was possible to physically, rather than digitally, alter the content of the image to maliciously fool the classifier.
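Those earlier, purely digital attacks are often illustrated with the fast gradient sign method (FGSM): nudge each pixel of the digital image a tiny amount in the direction that most increases the classifier’s error. The sketch below shows that generic technique, not the physical-world method developed in this paper; the function name and epsilon value are assumptions.

```python
# Sketch of the earlier, purely digital style of attack: perturb the pixels of
# an image (not a physical object) so a classifier mislabels it. This is a
# generic fast-gradient-sign step, not the method from this paper.
import torch
import torch.nn.functional as F

def digital_adversarial_example(model, image, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that tends to be misclassified."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Move each pixel a small step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```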

In order to generate these alterations, the researchers applied their new algorithm, which examines what the trained classifier “knew” about road signs and produces changes to the signs that fool the classifier when used in the real world. The research focuses on two types of alterations generated by the algorithm (a rough sketch of the underlying optimization follows this list):
• poster-printing attacks, where an attacker prints an actual-sized poster of a road sign that has subtle variations and pastes it over the real sign, and
• sticker attacks, where an attacker prints the generated sticker design and places it onto the existing road sign.
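The paper’s algorithm is more involved than this, but the core idea of optimizing a perturbation that is confined to a sticker-shaped region and that works across many photos of the same sign can be sketched roughly as follows. The mask, loss, optimizer settings, and function names here are assumptions for illustration, not the authors’ implementation.

```python
# Very rough sketch of optimizing a sticker-like perturbation: the change is
# confined to a mask (the sticker region) and is optimized against many
# variants of the sign photo, so it keeps working under different conditions.
# This illustrates the general idea, not the paper's actual algorithm.
import torch
import torch.nn.functional as F

def optimize_sticker(model, sign_images, target_label, mask, steps=200, lr=0.1):
    """sign_images: batch of photos of the same sign under varied conditions.
    mask: 1 where the sticker may change pixels, 0 elsewhere (assumed given)."""
    delta = torch.zeros_like(sign_images[0], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.full((len(sign_images),), target_label, dtype=torch.long)
    for _ in range(steps):
        optimizer.zero_grad()
        attacked = (sign_images + mask * delta).clamp(0.0, 1.0)
        logits = model(attacked)
        # Push the classifier toward the attacker's chosen target label,
        # averaged over all the physical-condition variants at once.
        loss = F.cross_entropy(logits, target)
        loss.backward()
        optimizer.step()
    return (mask * delta).detach()   # the printable sticker pattern
```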

Images: poster-printing attack (left) and sticker attack (right).

Following their proposed methodology, the researchers took photos of the altered signs under a range of physical conditions that mimic the different positions from which a sensor might encounter the object, and then fed those images into a machine learning application, in this case a road sign classifier. When photos of the above stop signs taken from different angles and distances were fed into the researchers’ road sign classifier in lab testing, the classifier misread them as speed limit signs 100% of the time for the poster-printing attack, and 66% of the time for the sticker attack. Because these attacks mimic vandalism or street art, it can be difficult for a casual observer to identify the risk they could pose.
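Conceptually, that evaluation boils down to feeding the photos taken under varied conditions to the classifier and counting how often it reports the attacker’s intended class. The helper below is a hypothetical sketch of that bookkeeping, not the researchers’ evaluation code.

```python
# Sketch of the evaluation step: feed photos of the altered sign, taken from
# many angles and distances, to the classifier and measure how often it
# reports the attacker's target label. Names here are hypothetical.
import torch

def attack_success_rate(model, photos, target_label):
    """photos: batch of images of the physically altered sign."""
    with torch.no_grad():
        predictions = model(photos).argmax(dim=1)
    # Success = the sign is read as the attacker's target class (e.g. a
    # stop sign read as a speed limit sign) instead of its true class.
    return (predictions == target_label).float().mean().item()
```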

The researchers show that it is possible to generate real-world alterations to objects that fool machine learning under a variety of conditions. They also propose a new methodology for evaluating the effectiveness of these alterations under a range of physical conditions that mimic those a sensor might encounter in the real world. The researchers’ aim is to help improve the security of technology like autonomous vehicles in the future by identifying security risks now. To read more, see the paper Robust Physical-World Attacks on Deep Learning Models as well as the FAQ.