

Awesome Talk
in
Workshop: Scene Representations for Autonomous Driving

Secure and Safe Autonomous Driving in Adversarial Environments

Bo Li


Abstract:

Advances in machine learning have led to the rapid and widespread deployment of ML algorithms in safety-critical applications such as autonomous driving and healthcare. Standard machine learning systems, however, assume that training and test data follow the same or similar distributions, without explicitly considering active adversaries who manipulate either distribution. For instance, our recent work has demonstrated that motivated adversaries can circumvent anomaly detectors and other machine learning models at test time through evasion attacks, or can inject well-crafted malicious instances into the training data to induce errors at inference time through poisoning attacks. In this talk, I will describe different perspectives on security and safety in machine learning, such as robustness, privacy, and generalization, and their underlying interconnections. I will focus on a certifiably robust learning approach that combines statistical learning with logical reasoning as an example, and then discuss principles for designing and developing practical trustworthy machine learning systems with guarantees by considering these trustworthiness perspectives holistically. I will also introduce our unified platform SafeBench, which generates diverse safety-critical driving scenarios for safety testing of autonomous vehicles.
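To make the evasion-attack setting concrete, the sketch below shows a one-step gradient-sign (FGSM-style) attack on a toy logistic-regression model. All model weights and inputs here are illustrative values chosen for the example, not anything from the talk; the abstract does not specify which attack or model the speaker's work uses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    p = sigmoid(w @ x + b)   # model's predicted probability of class 1
    grad_x = (p - y) * w     # d(binary cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and input (illustrative values only).
w, b = np.array([2.0, -3.0]), 0.5
x, y = np.array([1.0, 0.2]), 1.0

p_clean = sigmoid(w @ x + b)                # ~0.87: correctly classified as class 1
x_adv = fgsm_attack(x, y, w, b, eps=0.4)
p_adv = sigmoid(w @ x_adv + b)              # ~0.48: pushed below the 0.5 threshold

print(p_clean > 0.5)  # True  -> clean input classified correctly
print(p_adv > 0.5)    # False -> evasion succeeds
```

A small, bounded perturbation (here eps=0.4 per coordinate) is enough to flip the model's decision, which is the failure mode the abstract's evasion attacks exploit; poisoning attacks instead insert such crafted points into the training set.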
