Scientific Computing: 2024-2025

Degrees

  • Schedule B1 (CS&P): Computer Science and Philosophy
  • Schedule A2: Computer Science
  • Schedule B1: Computer Science
  • Schedule A2 (M&CS): Mathematics and Computer Science
  • Schedule B1 (M&CS): Mathematics and Computer Science

Learning outcomes

The overarching aim of this course is to present the theory for a range of techniques used in scientific computing in a manner that allows problems to be solved efficiently and robustly through: (i) the choice of a suitable numerical algorithm for a specific problem; and (ii) an understanding of how to implement the numerical algorithm efficiently.

The first component of the course will present the theory that underpins two commonly used methods for the iterative solution of linear systems, including preconditioning techniques that may be used to accelerate these iterative methods. Aside from black box preconditioners, the potential to exploit the structure of the matrix will also be demonstrated and related to the error analysis for these iterative methods. This material will then be shown to have application when solving constrained minimisation problems using Newton's method. The second component of the course will focus on practical methods for the numerical solution of initial value ordinary differential equations to within a specified tolerance. We will then exploit probabilistic methods to explain how the underlying features of a stochastic simulation of chemical reactions between a very large number of molecules may be described by a tractable system of initial value ordinary differential equations. We conclude with a brief introduction to Bayesian inference, explaining how this may be used as an alternative to classical optimisation techniques when attempting to estimate parameters from experimental data that is subject to measurement error.

Prerequisites

Prelims Linear Algebra (either CS or Maths), Continuous Mathematics and Probability.

Synopsis

Least squares matrix problems.  Solution by QR factorisation using the Gram-Schmidt method, the modified Gram-Schmidt method, Householder reflections and Givens rotations.  This material will also be required for the material on GMRES in later lectures.
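
To give a flavour of this material, here is a minimal sketch (in Python with NumPy, which is an assumption; the course does not prescribe a language) of QR factorisation by modified Gram-Schmidt, used to solve a small least squares problem.

```python
import numpy as np

def mgs_qr(A):
    """QR factorisation by modified Gram-Schmidt (an illustrative sketch).
    A is m x n with m >= n and full column rank."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        # Subtract the projection onto q_k from each remaining column
        # one at a time; doing this immediately (rather than all at once,
        # as in classical Gram-Schmidt) improves numerical stability.
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

# Least squares via QR: minimise ||Ax - b|| by solving R x = Q^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
Q, R = mgs_qr(A)
x = np.linalg.solve(R, Q.T @ b)  # R is upper triangular and nonsingular
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))
```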

Matrix norms, and sensitivity of linear systems and least squares matrix problems to errors in the data.  This will be required for error analysis of the conjugate gradient method and GMRES covered in later lectures.
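
The small experiment below (a sketch assuming NumPy; the matrix is an arbitrary mildly ill-conditioned choice) illustrates the sensitivity result: the relative error in the solution of Ax = b is bounded by roughly the condition number of A times the relative perturbation in b.

```python
import numpy as np

n = 8
A = np.vander(np.linspace(1.0, 2.0, n))   # mildly ill-conditioned matrix
x_true = np.ones(n)
b = A @ x_true

kappa = np.linalg.cond(A, 2)              # 2-norm condition number

# Perturb the right-hand side slightly and re-solve.
db = 1e-10 * np.random.default_rng(1).standard_normal(n)
x_pert = np.linalg.solve(A, b + db)

rel_err_x = np.linalg.norm(x_pert - x_true) / np.linalg.norm(x_true)
rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
print(f"cond(A)                        = {kappa:.2e}")
print(f"relative error in x            = {rel_err_x:.2e}")
print(f"bound: cond(A) * rel. err in b = {kappa * rel_err_b:.2e}")
```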

The conjugate gradient method for solving linear systems.  Introduction to iterative methods for solving linear systems, and their use for sparse matrices or when Gaussian elimination is too computationally expensive.  Krylov spaces.  Derivation of the conjugate gradient method for symmetric positive definite matrices, favourable properties, error analysis.  Proof that this method converges quickly when the matrix has a small number of distinct eigenvalues.  Chebyshev polynomials and their use to demonstrate that the conjugate gradient method converges quickly when the eigenvalues are closely clustered.
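
A minimal sketch of the method follows (Python/NumPy, illustrative only; library implementations add safeguards). The test matrix is constructed with only two distinct eigenvalues, so the iteration converges in a couple of steps, matching the theory described above.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
    """Plain conjugate gradient for symmetric positive definite A."""
    n = len(b)
    maxiter = maxiter or n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length minimising along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x

# SPD test matrix with exactly two distinct eigenvalues: CG converges
# in (essentially) two iterations, as the theory predicts.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
eigs = np.concatenate([np.full(95, 1.0), np.full(5, 10.0)])
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(100)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```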

GMRES for general linear systems.  Derivation of GMRES.  Use of the Arnoldi iteration for solving the least squares problem that arises on each iteration.  Properties and error analysis.  Practical computational issues, such as the choice of restart value.
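
A compact sketch of GMRES without restarts (again NumPy, as an assumption). It shows the Arnoldi iteration building an orthonormal Krylov basis, and the small least squares problem solved on each iteration; here np.linalg.lstsq stands in for the Givens-rotation update used in practice.

```python
import numpy as np

def gmres(A, b, m=50, tol=1e-10):
    """Basic GMRES without restarts (an illustrative sketch)."""
    n = len(b)
    Q = np.zeros((n, m + 1))   # orthonormal Krylov basis
    H = np.zeros((m + 1, m))   # upper Hessenberg matrix from Arnoldi
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for k in range(m):
        # Arnoldi step: one matrix-vector product, then orthogonalise
        # against the existing basis (modified Gram-Schmidt).
        v = A @ Q[:, k]
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        # Minimise ||beta*e1 - H y|| over the Krylov space.  In practice
        # this is updated with Givens rotations; lstsq keeps the sketch short.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        residual = np.linalg.norm(H[:k + 2, :k + 1] @ y - e1)
        if residual < tol or H[k + 1, k] == 0:
            return Q[:, :k + 1] @ y
        Q[:, k + 1] = v / H[k + 1, k]
    return Q[:, :m] @ y

# Quick check on a diagonally dominant (hence well-conditioned) system.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100)) + 100 * np.eye(100)
b = rng.standard_normal(100)
x = gmres(A, b)
print(np.linalg.norm(A @ x - b))
```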

Preconditioning of iterative linear solvers.  Examples of "off-the-shelf" preconditioners such as incomplete factorisations.  Derivation of more specialised preconditioners based on results from the error analysis for the conjugate gradient method and GMRES.  Preconditioners that exploit the structure of constrained linear systems.  Application to Newton's method for constrained optimisation.
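
As an example of an "off-the-shelf" preconditioner, the sketch below (assuming SciPy) applies an incomplete LU factorisation to a sparse nonsymmetric system and passes it to SciPy's GMRES; the model problem is an arbitrary choice for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse nonsymmetric model problem: 1D diffusion plus a convection term.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (A + sp.diags([0.5], [1], shape=(n, n))).tocsc()
b = np.ones(n)

# "Off-the-shelf" preconditioner: incomplete LU factorisation of A,
# wrapped as a linear operator that applies M^{-1} to a vector.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x_plain, info_plain = spla.gmres(A, b)
x_prec, info_prec = spla.gmres(A, b, M=M)
print(info_plain, info_prec)   # 0 means converged within the iteration limit
```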

Initial value ordinary differential equations.  An overview of initial value ordinary differential equations and simple methods for analytic solutions.  Simple methods for numerical solution including stability and accuracy.  Estimation of the different timescales exhibited by a system of ordinary differential equations, the relation to the concept of stiffness, and the choice of the numerical method used for solving the differential equations.  Error estimation.
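
The following sketch (assuming SciPy; the test equation is a standard stiff model problem, not taken from the course) shows why stiffness drives the choice of method: an explicit solver is forced to take many small steps by the fast transient timescale, while an implicit solver follows the smooth solution with far fewer steps.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = -1000*(y - cos(t)) - sin(t), y(0) = 1, has the smooth solution
# y = cos(t), but a fast transient timescale of roughly 1/1000.
def f(t, y):
    return -1000.0 * (y - np.cos(t)) - np.sin(t)

# An explicit method (RK45) must take tiny steps for stability ...
explicit = solve_ivp(f, (0.0, 10.0), [1.0], method="RK45",
                     rtol=1e-6, atol=1e-8)
# ... while an implicit method (BDF) tracks the smooth solution cheaply.
implicit = solve_ivp(f, (0.0, 10.0), [1.0], method="BDF",
                     rtol=1e-6, atol=1e-8)

print("RK45 steps:", explicit.t.size, "  BDF steps:", implicit.t.size)
```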

Stochastic simulation of chemical reactions.  Simulation methods that model each individual molecule.  The increase in computation required as the number of molecules increases.  Rigorous derivation of the chemical master equation for a general chemical reaction.  Gillespie's algorithm.
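
A sketch of Gillespie's direct method follows (Python/NumPy; the dimerisation reactions and rate constants are hypothetical, chosen only to exercise the algorithm).

```python
import numpy as np

def gillespie(x, propensities, stoich, t_end, rng):
    """Gillespie's direct method (an illustrative sketch).
    x: initial copy numbers; propensities(x): reaction rates a_j(x);
    stoich: one row per reaction giving the change it makes to x."""
    t, ts, xs = 0.0, [0.0], [x.copy()]
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0:                           # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)        # time to the next reaction
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x = x + stoich[j]
        ts.append(t)
        xs.append(x.copy())
    return np.array(ts), np.array(xs)

# Hypothetical example: dimerisation 2A -> B and dissociation B -> 2A.
k1, k2 = 1e-3, 0.5
def props(x):
    A, B = x
    return np.array([k1 * A * (A - 1) / 2.0, k2 * B])
stoich = np.array([[-2, 1], [2, -1]])
ts, xs = gillespie(np.array([200, 0]), props, stoich, 10.0,
                   np.random.default_rng(3))
print("final state (A, B):", xs[-1])
```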

Introduction to parameter estimation using Bayesian inference.  Use of simple methods for Bayesian inference when derivatives of the function to be minimised are not easy to obtain, making it difficult to use Newton's method.  An application will be parameter estimation, where the unknown parameters are coefficients in a system of initial value ordinary differential equations.  This material will not be covered as rigorously as other components of the course; it is included to give an introduction to how parameter recovery is typically carried out when the unknown parameters are coefficients in a differential equation model and the simulated data is the solution of the differential equation.
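
A minimal random-walk Metropolis sketch of this idea (assuming SciPy; the decay model, noise level, and proposal width are all hypothetical). Note that only forward solves of the differential equation are needed, never derivatives of the misfit.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Synthetic data: noisy observations of y' = -k*y, y(0) = 1.
rng = np.random.default_rng(4)
k_true, sigma = 0.7, 0.05
t_obs = np.linspace(0.0, 5.0, 20)
data = np.exp(-k_true * t_obs) + sigma * rng.standard_normal(t_obs.size)

def log_posterior(k):
    if k <= 0:                       # flat prior on k > 0
        return -np.inf
    sol = solve_ivp(lambda t, y: -k * y, (0.0, 5.0), [1.0],
                    t_eval=t_obs, rtol=1e-8)
    resid = data - sol.y[0]
    return -0.5 * np.sum(resid**2) / sigma**2   # Gaussian likelihood

# Random-walk Metropolis: propose, then accept or reject.
k, logp, chain = 1.0, log_posterior(1.0), []
for _ in range(5000):
    k_prop = k + 0.05 * rng.standard_normal()
    logp_prop = log_posterior(k_prop)
    if np.log(rng.random()) < logp_prop - logp:
        k, logp = k_prop, logp_prop
    chain.append(k)
print("posterior mean of k:", np.mean(chain[1000:]))  # discard burn-in
```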

Syllabus

Solution of least squares matrix problems using a QR factorisation based on the Gram-Schmidt method, the modified Gram-Schmidt method, Householder reflections and Givens rotations.  Matrix norms.  The sensitivity of the solution of linear systems and least squares matrix problems to errors in the data.  The conjugate gradient method and error analysis.  GMRES, solution of the least squares problem on each iteration of GMRES, error analysis.  Preconditioning of linear systems.  Constrained minimisation with Newton's method, using an appropriately preconditioned iterative linear solver.  Initial value ordinary differential equations: simple analytical and numerical solution methods; stiffness and its relation to the choice of numerical solution method; error estimation.  Stochastic simulation of chemical reactions and the chemical master equation.  An introduction to Bayesian inference.

Reading list

The material on least squares problems, matrix norms and iterative methods for linear systems is covered in the spirit of the following two books.

  • Matrix Computations, by Gene Golub and Charles Van Loan, Johns Hopkins University Press.
  • Numerical Linear Algebra, by Lloyd (Nick) Trefethen and David Bau, Society for Industrial and Applied Mathematics.


There are many numerical analysis textbooks including chapter(s) on numerical methods for initial value ODEs. The following both cover the material at the high level of rigour followed by this course, with the book by Iserles also covering practical implementation.

  • A First Course in the Numerical Analysis of Differential Equations, by Arieh Iserles, Cambridge University Press.
  • An Introduction to Numerical Analysis, by Endre Süli and David Mayers, Cambridge University Press.


The material on stochastic systems is covered in the following book.

  • Stochastic Modelling of Reaction-Diffusion Processes, by Radek Erban and Jonathan Chapman, Cambridge University Press.


We provide only a brief introduction to Bayesian inference. More detail can be found in the following book.

  • Markov Chain Monte Carlo in Practice, by W.R. Gilks, S. Richardson and D.J. Spiegelhalter, Chapman & Hall.
