Automated vehicles (AVs), also known as self-driving cars, have garnered a lot of press coverage over the past year, as automakers (Audi, Mercedes-Benz, GM, Toyota, etc.), Tier 1 suppliers (Delphi, Bosch, etc.), universities (Oxford, Stanford, Parma, etc.) and technology companies (Google, Apple, etc.) have all taken steps toward releasing autonomous cars in the not-too-distant future.
AVs offer many benefits, including improved safety, reduced traffic and emissions, and an enhanced driving experience. To function, AVs require multiple sensors (LiDAR, radar, camera, etc.) working in concert to make short-term (i.e., safety-related) and long-term (i.e., planning) driving decisions. But beyond being accurate, reliable, and trustworthy in a variety of difficult environments and driving situations, sensors must also fend off intentional or unintentional attacks that could disrupt the automation system.
Recently, my colleagues and I discovered a remote attack on camera-based systems and LiDAR using commercially available lasers with minor modifications. In laboratory experiments, we showed that we could attack these sensors through blinding, jamming, replay, relay, and spoofing attacks.
I’ll give a bit more detail on the attacks we conducted. AVs use Light Detection and Ranging (LiDAR) to detect objects, including obstacles on the road. Any attack that “fools” the sensor into thinking an object is in its path can trigger a false warning or an emergency brake. In one attack, we relayed the original signal sent from the target vehicle’s LiDAR from another position to create fake echoes and, eventually, make real objects appear closer or farther than their actual locations. We then extended the attack to create fake objects: we captured the LiDAR signal and duplicated it to actively spoof the LiDAR, with the intention to re(p)lay objects and control their position.
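To see why relayed echoes shift an object’s apparent position, recall that a LiDAR infers range from pulse time of flight: d = c·t/2. A minimal sketch (the function name and the 30 m / 100 ns figures are illustrative, not from our experiments) shows how an attacker who captures a pulse and re-emits it after an added delay makes the echo arrive late, so the object appears farther away:

```python
# Sketch: how a delayed, relayed echo shifts LiDAR-perceived distance.
C = 299_792_458.0  # speed of light, m/s

def perceived_distance(time_of_flight_s: float) -> float:
    """Range the LiDAR infers from a round-trip pulse time: d = c * t / 2."""
    return C * time_of_flight_s / 2.0

# Genuine echo from an object 30 m away:
true_tof = 2 * 30.0 / C

# Attacker relays the captured pulse with an extra 100 ns of delay;
# 100 ns of round-trip time corresponds to ~15 m of extra range:
spoofed_tof = true_tof + 100e-9

print(perceived_distance(true_tof))     # 30.0 m
print(perceived_distance(spoofed_tof))  # ~45.0 m
```

Relaying from a position closer to the victim (a shorter echo path) works the other way, making the object appear nearer than it is.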
We also conducted a number of attacks on cameras, which are used to detect traffic signs and other objects. Unfortunately, we showed that these cameras can be blinded by emitting light into the lens, which overexposes the image and hides objects from the autonomous system. In another attack, we hit the camera with bursts of light that confuse its auto-exposure controls; in some cases, the camera never recovers.
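One simple software-side check a system could apply against such blinding (a hypothetical sketch, not the countermeasure from our research; the function name, saturation level, and 50% threshold are all assumptions) is to flag frames in which an unusually large fraction of pixels is saturated:

```python
# Sketch: flag a frame as likely blinded when most pixels are near saturation.
def is_blinded(gray_pixels, saturation_level=250, threshold=0.5):
    """Return True if the fraction of near-saturated 8-bit pixel values
    exceeds the given threshold."""
    saturated = sum(1 for p in gray_pixels if p >= saturation_level)
    return saturated / len(gray_pixels) > threshold

normal_frame = [120, 90, 200, 60, 130, 180]   # varied intensities
blinded_frame = [255, 254, 253, 255, 100, 255]  # mostly washed out

print(is_blinded(normal_frame))   # False
print(is_blinded(blinded_frame))  # True
```

A flagged frame could then be discarded or handed off to redundant sensors rather than fed to the object detector.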
But we didn’t just discover the vulnerability and then toss the problem to the automakers and their suppliers. Instead, we proposed software and hardware countermeasures to improve sensor resilience against these attacks. At OnBoard Security, our goal is to make both autonomous and connected vehicles as resistant to cyber-attacks as possible. The hope is that research like ours will identify these potential attacks for automakers so they can build more robust systems and avoid potentially life-threatening situations for their customers.
Jonathan Petit spoke at Black Hat Europe 2015 on his research involving self-driving and connected cars. View his presentation here.
Originally posted on November 4, 2015 at blog.securityinnovation.com