OnBoard Security InSights

What the Tesla Autopilot Crash Tells Us About the Need for V2V Security

Posted by Jonathan Petit on Apr 6, 2017 3:08:29 PM

In September 2016, Tesla Motors issued an over-the-air software update to make its Autopilot system rely more on radar than on cameras. The update was a response to a highly publicized crash in May 2016 in which a 40-year-old man was killed when his Tesla drove into a turning tractor trailer. Tesla wrote in a blog post that Autopilot didn't detect "the white side of the tractor trailer against a brightly lit sky, so the brake was not applied." Without more information about the accident I can only speculate, but let me reflect on the problem and on how security plays a role. The apparent cause of the accident was that the camera did not detect the object because of natural, non-malicious blinding. I define blinding as affecting a camera in such a way that objects are not detected, whether partially or fully. So what does this say about the robustness of the system against blinding attacks? It suggests that Tesla's Autopilot either does not prioritize safety or does not perform sensor fusion correctly, if at all.

If comprehensive sensor fusion is performed, then the system should have received conflicting inputs and reacted accordingly. The camera said "no object detected" while the radar presumably said "object detected." The fused output should then have been "object detected," provided the system was set up to prioritize safety over a smooth ride. If, however, the fusion algorithm weighted the camera inputs more heavily than the radar, or was "muting" the radar inputs for some other reason, then the fused output could have been "no object detected." I can easily envision this kind of condition-dependent muting at night: in a dark environment the camera is less accurate than in daylight, so the automation would rely only on radar, ultrasonic sensors, or any other sensors that are not overly sensitive to light conditions.
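To make the difference concrete, here is a minimal sketch (in Python) of the two fusion policies described above: a safety-first rule that trusts any positive detection, and a camera-weighted rule that can dismiss the radar's warning. The sensor set, weights, and threshold are assumptions made for illustration only, not Tesla's actual algorithm.

    # Illustration only -- not Tesla's actual fusion logic. The sensor names
    # and the weighting scheme are assumptions made for this sketch.

    def fuse_safety_first(camera_detects: bool, radar_detects: bool) -> bool:
        """Report 'object detected' if ANY sensor sees an object (prioritize safety)."""
        return camera_detects or radar_detects

    def fuse_camera_weighted(camera_detects: bool, radar_detects: bool,
                             camera_weight: float = 0.7, radar_weight: float = 0.3) -> bool:
        """Weighted fusion that favors the camera -- the failure mode discussed above.
        A blinded camera reporting 'no object' drags the score below the decision
        threshold even though the radar sees the trailer."""
        score = camera_weight * camera_detects + radar_weight * radar_detects
        return score >= 0.5

    # Camera blinded by a bright sky; radar sees the trailer.
    print(fuse_safety_first(camera_detects=False, radar_detects=True))     # True  -> brake
    print(fuse_camera_weighted(camera_detects=False, radar_detects=True))  # False -> no braking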

It is clear that the Tesla Autopilot accident was not due to a malicious attack, but it does illustrate how an attack on automated vehicles could occur. An attacker could learn which environmental conditions affect the sensor fusion heuristics and then compromise the cyber-physical decisions by jamming or manipulating sensor signals. An example of this kind of attack is described in my research on LiDAR and cameras published last year. To protect against this (or a similar) attack, any automation system should rely on at least three good inputs before initiating a short-term maneuver (e.g., acceleration or a lane change). Enforcing multi-dimensional consensus makes an attack naturally more expensive, because the attacker must jam or spoof several independent channels at once. When in doubt, the default should be safety.
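A minimal sketch of that consensus rule might look like the following; the sensor list, the vote representation, and the threshold of three are illustrative assumptions, not a specification.

    from typing import Dict, Optional

    def approve_maneuver(sensor_votes: Dict[str, Optional[bool]],
                         required_agreement: int = 3) -> bool:
        """Approve a short-term maneuver only if at least `required_agreement`
        independent sensors report that the path is clear. Sensors that are
        jammed or unavailable report None and do not count toward consensus."""
        clear_votes = sum(1 for vote in sensor_votes.values() if vote is True)
        return clear_votes >= required_agreement

    # Camera blinded, lidar jammed: only two sensors agree, so do nothing risky.
    votes = {"camera": False, "radar": True, "lidar": None, "ultrasonic": True, "v2v": None}
    if approve_maneuver(votes):
        print("Maneuver permitted")
    else:
        print("Defaulting to safety: hold speed and lane")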

Now that we are learning (often the hard way) about the capabilities and limits of sensors, we should strive for ways to improve the overall reliability of autonomous vehicles. One clearly valuable approach is to take advantage of rapidly evolving Vehicle-to-Vehicle (V2V) communication as soon as it is commercially available. If the Tesla and the tractor trailer had been equipped with V2X technology, then Autopilot would have been aware of the truck’s speed, location, and level of threat. For example, the V2V communication specifications require a Vehicle Size field in Basic Safety Message (BSM) Part I, and BSM Part II has an optional field for Trailer Data, so the truck would have broadcast this information along with its location. Even if the truck lacked V2V equipment, surrounding vehicles could have sent their own sensor information to the Tesla, improving its sensor fusion and driving decisions through third-party sharing of potential threats.
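For illustration, here is a simplified sketch of how those BSM fields might be represented and used on the receiving side. The field names, units, and threat heuristic are assumptions for this example; the real message set is defined and ASN.1-encoded in the SAE J2735 standard.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class VehicleSize:          # BSM Part I: mandatory vehicle size
        width_cm: int
        length_cm: int

    @dataclass
    class TrailerData:          # BSM Part II: optional trailer information
        units: int              # number of trailer units
        length_cm: int

    @dataclass
    class BasicSafetyMessage:
        temp_id: int
        latitude_deg: float
        longitude_deg: float
        speed_mps: float
        heading_deg: float
        size: VehicleSize
        trailer: Optional[TrailerData] = None

    def classify_threat(bsm: BasicSafetyMessage, own_heading_deg: float) -> str:
        """Toy heuristic: flag a long combination vehicle crossing our path."""
        rel = abs(bsm.heading_deg - own_heading_deg) % 360
        crossing = 60 < min(rel, 360 - rel) < 120          # roughly perpendicular
        long_vehicle = bsm.size.length_cm > 1200 or bsm.trailer is not None
        return "HIGH" if (crossing and long_vehicle) else "LOW"

    # A tractor trailer broadcasting its size, trailer data, and heading.
    truck = BasicSafetyMessage(temp_id=0x1A2B, latitude_deg=29.41, longitude_deg=-82.54,
                               speed_mps=8.0, heading_deg=270.0,
                               size=VehicleSize(width_cm=260, length_cm=700),
                               trailer=TrailerData(units=1, length_cm=1600))
    print(classify_threat(truck, own_heading_deg=180.0))   # HIGH -> feed into sensor fusion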

To make autonomous vehicles more trustworthy, the entire ecosystem will have to treat security as a design imperative. Moreover, system integrators and software companies working on traditional vehicles must also consider autonomous scenarios when designing their products. Hardware suppliers have a responsibility to assess the safety and security of their components, and OEMs need to derive the higher-level security and robustness requirements for the system as a whole. And of course, drivers need to be made aware of the limitations of autonomous and semi-autonomous vehicles, and they need to take some personal responsibility for their use of the sophisticated but still emerging technology that they call “my car.” Protecting any connected vehicle from remote hackers is a daunting task. For autonomous vehicles, there is the added challenge of protecting vast amounts of sensor data from malicious attacks. This won’t be easy, but car makers will be more effective at protecting cars if they consider the threat vectors in the early stages of development.

Note: You can view my slides from my plenary talk at the Automated Vehicle Symposium 2016 here.

Originally posted on September 29, 2016 at blog.securityinnovation.com

Topics: Automotive, Internet of Things, V2X, Embedded Security, Autonomous Vehicles