Can autonomous vehicles be protected from complex attacks?

March 1, 2021, by Horst Buchwald

Brussels, 3/1/2021

Autonomous vehicles are "highly vulnerable" to a wide range of attacks, including adversarial attacks on their machine learning systems, according to a new report from ENISA, the EU's cybersecurity agency. The report describes, for example, how an attacker could manipulate an AV's image recognition so that it mislabels pedestrians and fails to stop for them at a crosswalk.
An attack of this kind would also mislead the motion planning and decision-making algorithms that depend on perception, along with other systems. The study additionally warns of "malicious back-end activity" and sensor attacks using "light beams."

https://www.enisa.europa.eu/news/enisa-news/cybersecurity-challenges-in-the-uptake-of-artificial-intelligence-in-autonomous-driving
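
To make the adversarial machine learning threat concrete, here is a minimal sketch of a classic evasion technique, the fast gradient sign method (FGSM), applied to a generic image classifier. It illustrates the class of attack the report describes rather than any specific technique it analyses; the PyTorch model interface, the [0, 1] pixel scaling, and the epsilon value are illustrative assumptions.

    # Minimal sketch of an adversarial (evasion) attack on an image classifier
    # using the fast gradient sign method (FGSM). The classifier, label and
    # epsilon value are generic placeholders, not any real vehicle's perception stack.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, true_label, epsilon=0.03):
        """Return a copy of `image` (a [C, H, W] tensor scaled to [0, 1]) with
        each pixel nudged by at most `epsilon` in the direction that increases
        the classifier's loss for `true_label`."""
        image = image.clone().detach().requires_grad_(True)
        logits = model(image.unsqueeze(0))                 # add a batch dimension
        loss = F.cross_entropy(logits, torch.tensor([true_label]))
        loss.backward()                                    # gradient w.r.t. the pixels
        adversarial = image + epsilon * image.grad.sign()  # one signed-gradient step
        return adversarial.clamp(0.0, 1.0).detach()

To a human the perturbed image looks unchanged, but the classifier's output can flip, which is why the corruption propagates into motion planning and decision making downstream.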

For example, a widely cited 2017 study showed that researchers could get an AV's image classifier to misidentify a stop sign as a speed limit sign by placing stickers on it. Tencent researchers conducted a similar study in 2019, using stickers on the road to trick Tesla's Autopilot system into veering into the wrong lane.
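
The sticker attacks mentioned above are adversarial patches: a small region of pixels is optimised so that, when pasted onto the image a camera sees, it pushes the classifier toward an attacker-chosen label. The sketch below outlines that optimisation loop in generic PyTorch; the patch size, corner placement, and hyperparameters are assumptions for illustration, not the setup of either study.

    # Illustrative sketch of optimising an adversarial "sticker" (patch) against
    # an image classifier. Patch size, placement and hyperparameters are
    # assumptions chosen for illustration only.
    import torch
    import torch.nn.functional as F

    def train_adversarial_patch(model, images, target_label,
                                patch_size=50, steps=200, lr=0.05):
        """Optimise a square patch that, pasted onto each image in `images`
        (an [N, 3, H, W] tensor scaled to [0, 1]), pushes the classifier toward
        `target_label`, e.g. reading a stop sign as a speed-limit sign."""
        patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
        optimizer = torch.optim.Adam([patch], lr=lr)
        target = torch.full((images.shape[0],), target_label, dtype=torch.long)
        for _ in range(steps):
            patched = images.clone()
            # Paste the current patch into the top-left corner of every image.
            patched[:, :, :patch_size, :patch_size] = patch.clamp(0.0, 1.0)
            loss = F.cross_entropy(model(patched), target)
            optimizer.zero_grad()
            loss.backward()        # the gradient flows back into the patch pixels
            optimizer.step()
        return patch.detach().clamp(0.0, 1.0)

A real physical attack additionally optimises the patch across scales, viewing angles, and lighting so that a printed sticker keeps working from a moving camera, which is what made the stop-sign and Autopilot demonstrations practical.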
As a result, the EU agency says automakers should design their machine-learning systems to withstand such attacks and to minimize safety risks. It also urges companies and policymakers to promote a "culture of security" across the automotive supply chain.
The agency concludes: "AI systems should be designed, implemented, and deployed by teams that include the automotive expert, the ML expert, and the cybersecurity expert."