Autonomous Vehicles (AVs), including aerial, ground, and sea vehicles, are becoming an integral part of our lives. Currently, most AVs trust sensor data to make navigation and other control decisions, and they likewise trust that the control commands issued to actuators are executed faithfully. While trusting sensor and actuator data with minimal validation has proven to be an effective trade-off in current market solutions, it is not a sustainable practice as AVs become more pervasive and cyber attacks grow in sophistication.
Many attacks can compromise the safe operation of AVs by manipulating data from a wide range of sensors, including GPS, cameras, IMUs, RADAR, ultrasonic sensors, and LiDAR. Given false or misleading sensor information, the machine learning algorithms in AVs will produce erroneous decisions that can cause the vehicle to crash, enter the opposing lane of traffic, or take passengers to an undesired destination. Such failures can also be produced by compromising the computational components of AVs, such as the planner: AVs are essentially computers on wheels and, as such, are subject to all the usual software vulnerabilities. Finally, other agents in the environment can take adversarial actions to confuse an AV.
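To make this failure mode concrete, the sketch below shows how a slowly growing GPS spoofing bias can walk a naive position estimate away from lane center while staying beneath the sensor's nominal noise at first. The estimator, its name, and all noise and bias figures are illustrative placeholders chosen for this example, not measurements from any particular platform.

```python
import numpy as np

# Hypothetical 1-D lateral-position estimator: the vehicle tries to hold
# lane center (offset 0 m) by blending dead-reckoned odometry with GPS fixes.
def estimate_lateral_offset(odometry, gps, gps_weight=0.5):
    """Naive fusion: average the dead-reckoned prediction with the GPS fix."""
    estimate = 0.0
    trajectory = []
    for odo, fix in zip(odometry, gps):
        prediction = estimate + odo                    # dead-reckoned update
        estimate = (1 - gps_weight) * prediction + gps_weight * fix
        trajectory.append(estimate)
    return np.array(trajectory)

rng = np.random.default_rng(0)
steps = 200
odometry = rng.normal(0.0, 0.01, steps)               # small, zero-mean increments
true_gps = rng.normal(0.0, 0.05, steps)                # honest fixes near lane center

# GPS spoofing: the attacker adds a bias that grows by 2 cm per step,
# initially well below the 5 cm noise the estimator already tolerates.
spoofed_gps = true_gps + 0.02 * np.arange(steps)

honest = estimate_lateral_offset(odometry, true_gps)
attacked = estimate_lateral_offset(odometry, spoofed_gps)

print(f"final offset, honest GPS:  {honest[-1]:+.2f} m")
print(f"final offset, spoofed GPS: {attacked[-1]:+.2f} m")  # drifts metres off center
```

Because each individual spoofed fix looks plausible, simple per-reading validation would not flag the attack, yet the accumulated drift is more than enough to place the vehicle in the opposing lane.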
The proposed work will develop a framework to robustify the decision-making of autonomous vehicles. In particular, we will build verification tools that discover attacks falsifying the security assumptions of the platform, and we will use the resulting counter-examples to train machine learning algorithms that are robust to such attacks.
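As a rough illustration of the intended loop, the following Python sketch pairs a random-search falsifier with counter-example-guided retraining on a toy perception model. Every name, feature, and attack budget here is a placeholder chosen for the example; the actual framework will use far more capable verification tools and learning pipelines than this minimal sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a learned perception component: a logistic classifier that
# maps two sensor-derived features to "obstacle ahead" (1) vs "clear" (0).
def predict(w, x):
    return 1.0 / (1.0 + np.exp(-(np.atleast_2d(x) @ w[:-1] + w[-1])))

def train(X, y, epochs=2000, lr=0.5):
    w = np.zeros(X.shape[1] + 1)
    for _ in range(epochs):
        p = predict(w, X)
        w[:-1] -= lr * X.T @ (p - y) / len(y)
        w[-1] -= lr * np.mean(p - y)
    return w

# Nominal training data: obstacle scenes cluster high, clear scenes cluster low.
X = np.vstack([rng.normal(1.0, 0.2, (100, 2)), rng.normal(-1.0, 0.2, (100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

# Falsifier: random search over a bounded sensor manipulation (an L-inf ball
# of radius eps) for an input that makes the model miss a genuine obstacle --
# a counter-example to the assumption "bounded spoofing cannot hide an obstacle".
def falsify(w, x, eps=1.5, trials=5000):
    for _ in range(trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(w, x + delta)[0] < 0.5:
            return x + delta
    return None

obstacle = np.array([1.0, 1.0])
w = train(X, y)

# Counter-example-guided retraining loop: every attack the falsifier finds is
# folded back into the training set with its correct label, and the model is
# retrained until the falsifier exhausts its search budget.
for round_ in range(1, 21):
    cex = falsify(w, obstacle)
    if cex is None:
        print(f"falsifier exhausted its budget after {round_ - 1} retraining round(s)")
        break
    X = np.vstack([X, cex])
    y = np.append(y, 1.0)
    w = train(X, y)
    print(f"round {round_}: counter-example {np.round(cex, 2)} added, model retrained")
else:
    print("falsifier still finds counter-examples; more rounds or stronger training needed")
```

The random-search falsifier stands in for the more systematic verification tools to be developed; the key point is the closed loop in which discovered counter-examples directly drive retraining, so that each verified attack shrinks the set of inputs on which the decision-making component can be fooled.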