
Uncertainty in Deep Learning: 2024-2025

Degrees

Schedule C1 (CS&P): Computer Science and Philosophy
Schedule C1: Computer Science
Schedule C1: Mathematics and Computer Science
Hilary Term: MSc in Advanced Computer Science

Overview

This is an advanced course in machine learning, focusing on recent advances in deep learning such as Bayesian neural networks. The course concentrates on the underlying fundamental methodology as well as on applications in areas such as autonomous driving, astronomy, and medicine. Recent statistical techniques based on neural networks have achieved remarkable progress in these domains, attracting a great deal of commercial and academic interest. The course introduces the mathematical definitions of the relevant machine learning models and derives their associated approximate inference algorithms, demonstrating the models in the various application domains. The taught material and assessment include both theoretical derivations and applied implementations, and students are expected to be proficient with both.

Learning outcomes

After studying this course, students will:

● Understand the definition of a range of deep learning models.

● Be able to derive and implement approximate inference algorithms for these models.

● Be able to implement and evaluate common neural network models for vision applications.

● Have a good understanding of the two numerical approaches to learning (stochastic optimization and Monte Carlo integration) and how they relate to the Bayesian approach.

● Have an understanding of how to choose a model to describe a particular type of data.

● Know how to evaluate a learned model in practice.

● Understand the mathematics necessary for constructing novel machine learning solutions.

● Be able to design and implement various machine learning algorithms in a range of real-world applications.

Prerequisites

Required background knowledge includes probability theory, linear algebra, continuous mathematics, multivariate calculus and multivariate probability theory, as well as good programming skills in Python. Students are required to have taken the Machine Learning course. The programming environment used in the lecture examples and practicals will be Python/PyTorch.

Undergraduate students are required to have taken the following courses:
● Machine Learning
● Continuous Mathematics
● Linear Algebra
● Probability
● Design and Analysis of Algorithms

Synopsis

  1. Introduction
    a. ML applied in the real world, and when we cannot trust it
    b. Sources of error and uncertainty
    c. What is “Bayesian Deep Learning”?
  2. Bayesian Probability Theory: the Language of Uncertainty
    a. Betting games
    b. ‘Belief’ as willingness to wager, and rational beliefs
    c. Deriving the laws of probability from rational beliefs, and interpretations of probability
  3. Bayesian Probabilistic Modelling (an Introduction)
    a. From beliefs to ML
    b. On ML and ‘assumptions’
    c. Generative story and probabilistic model
    d. Intuition: what does the likelihood really mean? (likelihood as a function of parameters)
    e. Intuition: The Posterior
  4. Bayesian Probabilistic Modelling of Functions
    a. Why uncertainty over functions
    b. Linear regression
    c. Linear basis function regression
    d. Parametrised basis functions
    e. Hierarchy of parametrised basis functions (aka neural networks)
    f. NNs through the probabilistic lens (generative story, probabilistic model, inference, predictions)
  5. Uncertainty over Functions
    a. Our model
    b. Decomposing uncertainty
    c. Aleatoric uncertainty
    d. Epistemic uncertainty
  6. Approximate Inference
    a. Approximating the posterior with variational inference
    b. Underlying principle of variational inference
    c. Kullback Leibler properties and KL with continuous random variables
    d. KL for approximate inference
    e. KL for approximate inference in regression: example
    f. Competing tensions in the objective (a toy VI sketch follows this synopsis)
    g. Uncertainty in regression with approximate posterior
  7. Some mathematical tools
    a. MC integration
    b. Integral derivatives
    c. Example of integral derivative estimation
    d. The reparametrisation trick (a code sketch follows this synopsis)
  8. Stochastic Approximate Inference in Deep NN
    a. Using the reparam trick in a shallow regression model
    b. Stochastic VI in deep models
    c. Bayesian neural networks
  9. Inference in Real-world Deep Models 1
    a. Issues with what we’ve discussed so far
    b. Stochastic regularisation and its implied objective
    c. Feature space noise to weight space
  10. Inference in Real-world Deep Models 2
    a. Stochastic regularisation as approximate inference
    b. Dropout uncertainty example (understanding what we saw in the first lecture; a sketch follows this synopsis)
    c. Epistemic uncertainty in regression BNNs
    d. How to visualise BNNs?
  11. Classification in Bayesian Neural Networks
    a. Classification generative story
    b. Approximate inference in classification NNs
  12. Uncertainty in Classification
    a. Information theoretic quantities
    b. Information theoretic quantities applied to BNNs
    c. Predictive Entropy
    d. What are ambiguous points in classification?
    e. Mutual Information (a sketch follows this synopsis)
  13. Real-world Applications of Model Uncertainty: Active Learning
    a. Active learning: ML with small data and large models
    b. Melanoma diagnosis
    c. Acquisition functions and intuition
    d. Example: How to implement active learning with MNIST
    e. Active Learning with Batches of Points
  14. Real-world Applications of Model Uncertainty: Astronomy, Metrics, and Benchmarks
    a. Deep learning with a human in the loop
    b. Galaxy zoo citizen science project
    c. Exoplanet Atmospheric Retrieval
    d. How can we tell if our uncertainty estimates are sensible?
    e. Diabetic retinopathy diagnostics application
  15. Real-world Applications of Model Uncertainty: Autonomous Driving
    a. Bayesian deep learning in Autonomous Driving
    b. Perception: Uncertainty in Computer Vision
      i. Uncertainty in autonomous driving semantic segmentation
      ii. Bayesian decision theory and Loss-Calibrated BNNs
      iii. Uncertainty Metrics in AV Segmentation
    c. Decision making: Planning under uncertainty
      i. Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts?
    d. More applications
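
To make item 6 concrete, here is a minimal stochastic variational inference sketch in PyTorch (a toy example, not course material; the data, prior, noise level, and learning rate are illustrative assumptions). Maximising the ELBO exposes the competing tensions in the objective: a data-fit term pulls the approximate posterior q(w) towards the likelihood, while the KL term pulls it towards the prior.

```python
import torch
import torch.distributions as dist

# Toy stochastic VI: fit q(w) = N(mu, sigma^2) to the posterior of a
# 1-D linear model y = w * x + noise by maximising the ELBO
#   E_q[log p(y | x, w)] - KL(q(w) || p(w)).
torch.manual_seed(0)
x = torch.linspace(-1.0, 1.0, 20)
y = 2.0 * x + 0.1 * torch.randn(20)              # synthetic data, true w = 2

mu = torch.zeros(1, requires_grad=True)          # variational mean
log_sigma = torch.zeros(1, requires_grad=True)   # variational log-std
prior = dist.Normal(0.0, 1.0)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    q = dist.Normal(mu, log_sigma.exp())
    w = q.rsample()                               # reparametrised sample of w
    log_lik = dist.Normal(w * x, 0.1).log_prob(y).sum()  # data-fit term
    kl = dist.kl_divergence(q, prior).sum()       # complexity penalty
    loss = kl - log_lik                           # negative ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()

print(mu.item(), log_sigma.exp().item())          # mu should move towards 2
```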
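
For item 7, the reparametrisation trick in isolation (the integrand f and the sample count are arbitrary choices for illustration): writing z = mu + sigma * eps with eps ~ N(0, 1) makes a Monte Carlo estimate of an expectation differentiable with respect to the distribution's parameters.

```python
import torch

# MC estimate of E_{z ~ N(mu, sigma^2)}[f(z)] with gradients w.r.t. mu and
# log_sigma flowing through the samples via z = mu + sigma * eps.
mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)

def f(z):
    return torch.sin(z) ** 2                      # arbitrary integrand

eps = torch.randn(10_000)                         # noise independent of parameters
z = mu + log_sigma.exp() * eps                    # reparametrised samples
estimate = f(z).mean()                            # MC estimate of the integral
estimate.backward()                               # differentiate the estimator
print(estimate.item(), mu.grad.item(), log_sigma.grad.item())
```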
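
For items 8-10, a sketch of dropout uncertainty (MC dropout) in regression; the architecture, dropout rate, and number of forward passes are illustrative assumptions, and a real model would be trained before this step. Keeping dropout active at test time makes each forward pass a draw from an approximate posterior over networks, and the spread of those draws estimates epistemic uncertainty.

```python
import torch
import torch.nn as nn

# MC dropout: leave dropout on at test time so each forward pass samples a
# different thinned network; averaging gives the predictive mean, and the
# variance across passes approximates epistemic uncertainty.
net = nn.Sequential(
    nn.Linear(1, 128), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(128, 1),
)
net.train()                                       # keep dropout stochastic

x = torch.linspace(-3.0, 3.0, 100).unsqueeze(1)
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(50)])  # [T, N, 1]

pred_mean = samples.mean(dim=0)                   # predictive mean
epistemic_var = samples.var(dim=0)                # spread across sampled functions
```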
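
For items 12-13, the information-theoretic quantities and their use as an acquisition function (array shapes and the toy inputs below are assumptions for illustration). Predictive entropy measures total uncertainty; subtracting the expected entropy of the individual passes leaves the mutual information between prediction and model parameters, the epistemic part, which is also the BALD score used to pick points to label in active learning.

```python
import torch

def entropy(p, eps=1e-12):
    # Shannon entropy along the class dimension; eps guards against log(0).
    return -(p * (p + eps).log()).sum(dim=-1)

def uncertainty(probs):
    # probs: [T, N, C] class probabilities from T stochastic forward passes
    # (e.g. MC dropout) over N inputs with C classes.
    mean_p = probs.mean(dim=0)                         # [N, C] predictive distribution
    predictive_entropy = entropy(mean_p)               # total uncertainty, [N]
    expected_entropy = entropy(probs).mean(dim=0)      # aleatoric part, [N]
    mutual_info = predictive_entropy - expected_entropy  # epistemic part (BALD)
    return predictive_entropy, mutual_info

# Toy example: 20 passes, 5 inputs, 3 classes.
probs = torch.softmax(torch.randn(20, 5, 3), dim=-1)
pe, mi = uncertainty(probs)
query = mi.argmax()               # active learning: label the most informative point
```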

Syllabus

Bayesian deep learning. The formal language of uncertainty (Bayesian probability theory). Tools to use this language in ML (Bayesian probabilistic modelling). Techniques to scale to real-world deep learning systems (modern variational inference). Developing large deep learning systems that convey uncertainty. Real-world applications.

Reading list

● Kevin P. Murphy. Probabilistic Machine Learning: Advanced Topics. MIT Press (2023)
  ○ https://probml.github.io/pml-book/book2.html
● Kevin P. Murphy. Probabilistic Machine Learning: An Introduction. MIT Press (2022)
  ○ https://probml.github.io/pml-book/book1.html
● Ian Goodfellow, Yoshua Bengio and Aaron Courville. Deep Learning. MIT Press (2016)
  ○ https://www.deeplearningbook.org/

Taking our courses

This form is not to be used by students studying for a degree in the Department of Computer Science, or by Visiting Students who are registered for Computer Science courses.

Other matriculated University of Oxford students who are interested in taking this, or other, courses in the Department of Computer Science must complete this online form by 17:00 on Friday of 0th week of the term in which the course is taught. Late requests, and requests sent by email, will not be considered. All requests must be approved by the relevant Computer Science departmental committee and can only be submitted using this form.