Verifiable Learning-Enabled Systems using Conformal Prediction
- 16:00, 7th March 2024 (online)
Learning-enabled systems promise to enable many future technologies such as autonomous driving, intelligent transportation, and robotics. Accelerated by computational advances in machine learning and AI, the design of learning-enabled systems has seen tremendous success. At the same time, however, new fundamental challenges arise regarding the safety and reliability of these increasingly complex systems, which operate in unknown dynamic environments. In this talk, I will provide new insights and discuss exciting opportunities to address these challenges using conformal prediction (CP), a statistical tool for uncertainty quantification. I will advocate for the use of CP in systems theory due to its simplicity, generality, and efficiency, in contrast to existing model-based techniques, which are either conservative or suffer from scalability issues.
We will start the talk with an intuitive and accessible introduction to CP for the non-expert interested in applying CP to real-world engineering problems. In the first part of the talk, we then use CP to design probabilistically safe motion planning algorithms in dynamic environments. Specifically, we design a model predictive controller that uses (i) learning-enabled trajectory predictors to obtain predictions of the environment, and (ii) conformal prediction regions that quantify the uncertainty of these predictions. While existing data-driven approaches quantify uncertainty heuristically, we quantify uncertainty in a distribution-free manner and obtain probabilistically valid prediction regions. In the second part of the talk, we are interested in predicting failures of learning-enabled systems during their operation. In particular, we leverage CP to design two predictive runtime verification algorithms (an accurate and an interpretable version) that compute the probability that a high-level system specification is violated. In both parts of the talk, we will discuss how to use robust and adaptive versions of CP to deal with distribution shifts that arise when the deployed learning-enabled system differs from the system at design time. If time permits, I will conclude with our recent work on conformal predictive programming for solving chance-constrained optimization problems.
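To give a flavor of how such conformal prediction regions are built, the following is a minimal, generic sketch of split conformal prediction applied to trajectory-prediction errors. The function and variable names (e.g. `predict`, `conformal_radius`) and the simple norm-ball setting are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

def conformal_radius(predict, calib_inputs, calib_targets, delta=0.05):
    """Return a radius C such that, for a fresh exchangeable test sample,
    the true target lies within distance C of the prediction with
    probability at least 1 - delta (marginal, distribution-free guarantee)."""
    # Nonconformity scores: prediction errors on a held-out calibration set.
    scores = np.array([np.linalg.norm(y - predict(x))
                       for x, y in zip(calib_inputs, calib_targets)])
    n = len(scores)
    # Finite-sample corrected quantile level ceil((n+1)(1-delta))/n.
    level = np.ceil((n + 1) * (1 - delta)) / n
    return np.quantile(scores, min(level, 1.0), method="higher")

# Usage sketch: given a new input x, the prediction region is the ball
#   {y : ||y - predict(x)|| <= C},
# which a planner (e.g. a model predictive controller) can treat as an
# uncertainty set around the predicted trajectory of the environment.
```

The appeal of this construction is its simplicity and generality: it only requires a calibration dataset and a choice of nonconformity score, with no assumptions on the underlying predictor or data distribution.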