
Uncertainty Propagation from Perception to Decision Making

1st October 2021 to 31st March 2025

Deep learning systems are already in wide use in industry, in applications such as assisted driving and medical diagnosis, putting AI safety and robustness to the test on a daily basis. When such AI systems fail, the results can be fatal. For AI systems to be deployed successfully and responsibly, it is necessary to develop safe AI tools that can tell when they are about to fail and that remain robust in real-world settings.

Autonomous driving (AD) is notorious for its complex pipelines, composed of many components that depend on the outputs of preceding ones: perception modules feed into planning, and planning into control. In such pipelines the errors of one component percolate into downstream components and can be amplified, potentially leading to catastrophic outcomes. Yet even when components in a pipeline ‘know’ when they are guessing at random, i.e. they can quantify their uncertainty, current approaches only use this information to alert the passenger to take control, effectively halting the pipeline, or to fall back to a safety procedure. Downstream components do not attempt to use the uncertainty of preceding components to improve their own output.

Building on related work in the medical domain, the project will develop tools to propagate uncertainty from perception to decision making in autonomous driving, allowing downstream components to change their output depending on the confidence of preceding components. More concretely, it will develop AD stack components that can use uncertainty information in their predictions, either explicitly (by adding auxiliary inputs) or implicitly (by feeding in samples of potential outputs from preceding components). The project will then develop a new approach to perform filtering over the entire AD stack, representing beliefs all the way from perception to decision making, and treating the stack as a closed loop that acts in the world to reduce the uncertainty of its components. This approach turns previously irreducible uncertainty from a static observation into reducible uncertainty: although it might not be possible to disambiguate a cyclist from a pedestrian in a single frame, the car's actions can be chosen, e.g. to collect more frames, so that this uncertainty can be reduced.
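To illustrate the implicit route, the sketch below feeds Monte Carlo samples from a perception module into a simple planner that scores candidate trajectories against every sample, so that a more uncertain detection produces a more conservative plan. This is a minimal sketch under invented assumptions: the perception interface, the cost function, and all numbers are illustrative, not taken from any real AD stack.

```python
import numpy as np

rng = np.random.default_rng(0)

def perception_samples(frame, n_samples=20):
    """Return n_samples plausible detections for one object.

    Each sample is an (x, y) position; the spread of the samples
    encodes the perception module's uncertainty (e.g. from MC dropout
    or a deep ensemble). Hypothetical stand-in for a real detector.
    """
    mean = np.array([12.0, 1.5])               # point estimate
    cov = np.array([[1.0, 0.2], [0.2, 0.5]])   # assumed uncertainty
    return rng.multivariate_normal(mean, cov, size=n_samples)

def trajectory_cost(traj, detection):
    """Penalise trajectories whose closest approach to the object is small."""
    dists = np.linalg.norm(traj - detection, axis=1)
    return np.exp(-dists.min())

def plan(candidate_trajs, frame):
    """Pick the trajectory with the lowest worst-case cost over samples.

    Because the planner sees samples rather than a single detection,
    upstream uncertainty changes its output: spread-out samples push
    it towards more conservative trajectories.
    """
    samples = perception_samples(frame)
    worst_case = [max(trajectory_cost(t, s) for s in samples)
                  for t in candidate_trajs]
    return candidate_trajs[int(np.argmin(worst_case))]

# Two candidate trajectories: straight ahead vs. a gentle swerve away.
straight = np.stack([np.linspace(0, 20, 40), np.zeros(40)], axis=1)
swerve = np.stack([np.linspace(0, 20, 40),
                   -1.5 * np.sin(np.linspace(0, np.pi, 40))], axis=1)
best = plan([straight, swerve], frame=None)
```

The closed-loop idea can be sketched with the cyclist-vs-pedestrian example: a binary Bayesian filter whose belief remains ambiguous after one frame, but becomes confident once the car acts to gather more frames. The per-frame accuracy, confidence threshold, and observation model below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

P_CORRECT = 0.7   # assumed per-frame accuracy of the classifier
TRUE_CLASS = 0    # 0 = cyclist, 1 = pedestrian (unknown to the filter)

def observe():
    """One noisy per-frame classification of the ambiguous object."""
    return TRUE_CLASS if rng.random() < P_CORRECT else 1 - TRUE_CLASS

def bayes_update(belief, obs):
    """Binary Bayesian filter update for a single observation."""
    likelihood = np.array([P_CORRECT if obs == c else 1 - P_CORRECT
                           for c in (0, 1)])
    posterior = likelihood * belief
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])   # irreducible from a single frame ...
frames = 0
while belief.max() < 0.95:      # ... but reducible by acting to get more frames
    belief = bayes_update(belief, observe())
    frames += 1                 # the car slows down / waits to gather evidence

print(f"committed to class {belief.argmax()} after {frames} extra frames")
```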

