On June 14, 2017, the US Senate Committee on Commerce, Science, and Transportation convened a hearing titled "Paving the Way for Self-Driving Vehicles." During the nearly 2.5-hour session, senators and expert witnesses discussed a wide range of topics regarding autonomous vehicles, including insurance, access for the disabled, and the impact on safety and drunk driving. The hearing consisted of several polite exchanges of ideas and plans, until Senator Ed Markey pressed the witnesses for their thoughts on mandatory federal cybersecurity regulations for the automotive industry.
NTRU is a cryptosystem built on a special type of polynomial ring. The underlying hardness assumption, known as the NTRU assumption, is that the ratio of two short polynomials (polynomials whose coefficients are very small compared to the modulus q) is indistinguishable from a uniformly random polynomial in this ring. This indistinguishability is crucial in designing a cryptosystem.
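The ring arithmetic behind NTRU can be made concrete with a toy sketch. The parameters below (N = 5, q = 17) are chosen for readability only and are far too small to be secure; real NTRU parameter sets use much larger values. The ring is Z_q[x]/(x^N − 1), so multiplication is a cyclic convolution in which exponents wrap around modulo N:

```python
def ring_mul(f, g, N, q):
    """Multiply polynomials f and g in Z_q[x]/(x^N - 1).

    Polynomials are lists of N coefficients, lowest degree first.
    Reducing modulo x^N - 1 makes this a cyclic convolution:
    the exponent i + j wraps around modulo N.
    """
    h = [0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[(i + j) % N] = (h[(i + j) % N] + fi * gj) % q
    return h

# Toy parameters, illustrative only.
N, q = 5, 17

# "Short" polynomials have coefficients in {-1, 0, 1}, tiny compared to q.
# A coefficient of -1 is stored as q - 1 = 16.
f = [1, 16, 0, 1, 0]   # 1 - x + x^3
g = [0, 1, 0, 0, 16]   # x - x^4

# x^3 * x^4 = x^7 = x^2 in this ring, since x^5 = 1.
print(ring_mul(f, g, N, q))   # [1, 1, 15, 0, 0], i.e. 1 + x - 2x^2
```

In NTRU the public key is formed as h = g · f⁻¹ for two secret short polynomials f and g; the assumption is that this h cannot be distinguished from a polynomial drawn uniformly at random from the ring. (Computing the inverse f⁻¹ requires an extended-Euclidean step that this sketch omits.)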
In July 2016, the Automotive Information Sharing and Analysis Center (Auto-ISAC) released "Automotive Cybersecurity Best Practices" for carmakers and their suppliers. The document expands on the "Framework for Automotive Cybersecurity Best Practices" published in January 2016. This is the first time the automakers have addressed cybersecurity in a formal, collective manner, and it is a strong sign that they are taking hacker threats seriously.
OnBoard Security, the embedded security division of Security Innovation, recently commented on the US Department of Transportation’s Notice of Proposed Rulemaking (NPRM) on V2V communications. OnBoard Security strongly supports the proposed regulation, since the number of lives saved increases dramatically as the number of cars equipped with V2V grows. Widespread penetration of the technology, and the corresponding prevention of deaths, can only be reached in a reasonable time with a mandate.
In September 2016, Tesla Motors issued an over-the-air software update to make its Autopilot system rely more on radar than on cameras. This update was in response to a highly publicized crash in May 2016 in which a 40-year-old man was killed when his Tesla crashed into a turning tractor trailer. Tesla wrote in a blog post that Autopilot didn't detect "the white side of the tractor trailer against a brightly lit sky, so the brake was not applied." Without more information about the accident I can only speculate, but let me reflect on the problem and the role security plays. The cause of the accident was that the camera did not detect the object because of natural, non-malicious blinding. I define blinding as affecting a camera, partially or fully, so that objects are not detected. So what does this incident say about the robustness of the system against deliberate blinding attacks? It suggests that Tesla's Autopilot either does not prioritize safety or does not do sensor fusion correctly, if at all.
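To make the sensor-fusion point concrete, here is a hypothetical sketch (the function names and scenario are mine, not Tesla's) contrasting a safety-prioritizing fusion rule, which brakes when any sensor reports an obstacle, with an agreement rule that can be defeated by blinding a single sensor:

```python
def fuse_any(camera_hit: bool, radar_hit: bool) -> bool:
    """Safety-first (OR) fusion: treat an obstacle report from ANY
    sensor as real. A blinded camera cannot suppress a radar detection."""
    return camera_hit or radar_hit

def fuse_all(camera_hit: bool, radar_hit: bool) -> bool:
    """Agreement (AND) fusion: act only when ALL sensors agree.
    Blinding a single sensor is enough to suppress braking."""
    return camera_hit and radar_hit

# Hypothetical scenario: a bright sky blinds the camera while the
# radar still returns an obstacle (the trailer).
camera_hit, radar_hit = False, True
print(fuse_any(camera_hit, radar_hit))  # True  -> brake
print(fuse_all(camera_hit, radar_hit))  # False -> no brake
```

The OR rule trades false positives (phantom braking) for safety; the AND rule does the opposite, which is why a system that fails to brake when one sensor is blinded invites exactly the attack described above.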
Automated vehicles (AVs), also known as self-driving cars, have been garnering a lot of press coverage over the past year, as automakers (Audi, Mercedes-Benz, GM, Toyota, etc.), Tier 1 suppliers (Delphi, Bosch, etc.), universities (Oxford, Stanford, Parma, etc.), and technology companies (Google, Apple, etc.) have all taken steps toward releasing autonomous cars in the not-too-distant future.
The National Highway Traffic Safety Administration (NHTSA), part of the US Department of Transportation, recently issued its much-anticipated Federal Automated Vehicles Policy. This 116-page document is guidance, not mandatory rulemaking, intended to "guide manufacturers and other entities in the safe design, development, testing, and deployment of HAVs [Highly Automated Vehicles]."
According to consulting firm Frost & Sullivan, we can expect the number of hackers to grow to more than 150,000 globally by 2018. Combined with the projection that the number of connected vehicles on the road will exceed 220 million over the same period, this creates a growing risk of a significant automotive cybersecurity breach.