Options in Computer Science

Students studying the Computer Science course will take a total of ten optional courses across their second and third years. This will normally consist of four options in the second year, and six in the third.

Students studying Mathematics and Computer Science will take ten optional courses across their second and third years. This will consist of at least two from Mathematics, and two from Computer Science.

The options that are offered may vary from year to year as the course develops, and according to the interests of teaching staff. The following examples illustrate the kinds of topics that have been offered recently.

Artificial Intelligence

This is an introduction to the field of artificial intelligence (AI), with a particular focus on search as the fundamental technique for solving AI problems.
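
To make the role of search concrete, here is a minimal sketch (the toy graph and the `bfs_path` helper are invented for this illustration, not course material) of uninformed breadth-first search over a small state space:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: returns a shortest path from start to goal, or None."""
    frontier = deque([[start]])          # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for successor in graph.get(state, []):
            if successor not in visited:
                visited.add(successor)
                frontier.append(path + [successor])
    return None

# Toy state space: each key maps a state to its successor states.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["G"], "E": ["G"]}
print(bfs_path(graph, "A", "G"))   # ['A', 'B', 'D', 'G']
```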

Combinatorial Optimisation

This course is an introduction to combinatorial optimisation, with a focus on algorithms. It introduces the ideas and techniques underlying many standard algorithms taught in introductory first-year and second-year algorithms courses, and goes beyond them.

Computational Complexity

This course is an introduction to the theory of computational complexity and standard complexity classes. One of the most important insights to have emerged from Theoretical Computer Science is that computational problems can be classified according to how difficult they are to solve. This classification has shown that many computational problems are impossible to solve, and many more are impractical to solve in a reasonable amount of time. To classify problems in this way, one needs a rigorous model of computation, and a means of comparing problems of different kinds. We introduce these ideas in this course, and show how they can be used.
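
As a small illustration of the gap between finding and checking solutions, the sketch below (an invented example, not course material) uses the subset-sum problem: a proposed certificate can be verified quickly, while the naive search examines exponentially many subsets:

```python
from itertools import combinations

def verify_certificate(numbers, target, subset):
    """Polynomial-time check: does the proposed subset of indices sum to target?"""
    return set(subset) <= set(range(len(numbers))) and \
        sum(numbers[i] for i in subset) == target

def solve_by_search(numbers, target):
    """Brute force: try all 2^n subsets -- exponential in the input size."""
    for r in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), r):
            if verify_certificate(numbers, target, subset):
                return subset
    return None

numbers, target = [3, 9, 8, 4, 5, 7], 15
print(solve_by_search(numbers, target))   # (2, 5): numbers[2] + numbers[5] == 15
```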

Computer-Aided Formal Verification

Computer-aided formal verification aims to improve the quality of digital systems by using logical reasoning, supported by software tools, to analyse their designs. The idea is to build a mathematical model of a system and then try to prove properties of it that validate the system's correctness - or at least help discover subtle bugs. The proofs can be millions of lines long, so specially-designed computer algorithms are used to search for and check them.

This course provides a survey of several major software-assisted verification methods, covering both theory and practical applications. The aim is to familiarise students with the mathematical principles behind current verification technologies and give them an appreciation of how these technologies are used in industrial system design today.

Computer Architecture

This course follows on from Digital Systems by showing you how hardware components can be used to design processors that achieve high performance. Central to the course is the design of pipelined architectures that execute multiple instructions simultaneously, detecting and resolving interactions between instructions dynamically. The course covers the design of instruction sets and different styles of processor implementation, followed by a brief survey of parallel machines that achieve still higher performance.

Computer Graphics

This is an introductory course in Computer Graphics. It covers a wide range of topics in interactive computer graphics, at all levels of abstraction, with emphasis on both theory and practice. It follows a standard textbook in the field, with additional material used to keep the course up to date. You will cover a range of basic graphics techniques, from the generation of graphics on raster output devices to the types of hardware used to display three-dimensional objects. It will be of benefit to anybody who uses computer graphics to display objects or data.
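
As a taste of raster techniques, here is a minimal sketch (illustrative only; the function name is invented) of Bresenham's integer line-drawing algorithm, which chooses the pixels that approximate a line segment on a grid:

```python
def rasterise_line(x0, y0, x1, y1):
    """Bresenham's algorithm: the grid pixels approximating a line segment."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:           # step in x
            err += dy
            x0 += sx
        if e2 <= dx:           # step in y
            err += dx
            y0 += sy
    return pixels

print(rasterise_line(0, 0, 6, 3))
# [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3)]
```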

Computer Networks

This course covers the core theory of Computer Networks so that students understand the science underpinning computer communications: the basic architectural principles of networking, how the Internet works today, and how the theory is applied in current technology. The course covers the central problems of computer networks and the standard ways of approaching and resolving them, illustrated with relevant real-world, state-of-the-art examples. The practicals allow students to apply the theory to real-world examples.

Computer Security

Security aspects of computer systems have far-reaching implications in an increasingly networked world. In this course we will cover the principles underlying computer security, the problems that must be solved, and some of the solutions that are used in implementing secure systems. The course includes the use of formal models based on logic to model security problems and to verify proposed solutions.

Concurrency

Computer networks, multiprocessors and parallel algorithms, though radically different, all provide examples of processes acting in parallel to achieve some goal. All benefit from the efficiency of concurrency yet require careful design to ensure that they function correctly. The concurrency course introduces the fundamental concepts of concurrency using the notation of Communicating Sequential Processes (CSP). By introducing communication, parallelism, deadlock, livelock, etc., it shows how CSP represents, and can be used to reason about, concurrent systems. Students are taught how to design interactive processes and how to modularise them using synchronisation. One important feature of the module is its use of both algebraic laws and semantic models to reason about reactive and concurrent designs. Another is its use of FDR to animate designs and verify that they meet their specifications.
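
The course itself works in CSP and FDR; as a loose analogue only (not the course's notation), the Python sketch below shows two processes communicating over a channel modelled as a bounded queue, with a sentinel value signalling termination:

```python
import threading, queue

channel = queue.Queue(maxsize=4)      # bounded buffer acting as the channel
DONE = object()                       # sentinel signalling end of stream

def producer():
    for item in range(10):
        channel.put(item)             # blocks if the consumer falls behind
    channel.put(DONE)

def consumer():
    while True:
        item = channel.get()          # blocks until something is available
        if item is DONE:
            break
        print("consumed", item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```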

Data Visualisation

Well-designed visualisations capitalise on human facilities for processing visual information and thereby improve comprehension, memory, inference, and decision making. In addition, the advent of the so-called “explosion of Big Data” has increased the need for effective visualisation systems for data analysis and communication. In this course we will study techniques and algorithms for creating effective visualisations based on principles from graphic design, visual art, perceptual psychology, and cognitive science. The course is targeted both at students interested in using visualisation in their own work and at students interested in building better visualisation tools and systems.

Databases

Databases are at the heart of modern commercial application development. Their use extends beyond this to many applications and environments where large amounts of data must be stored for efficient update and retrieval. Their principles and fundamental techniques are being extended today to the Web and to novel kinds of data, like XML. The purpose of this course is to provide you with an introduction to the principles of database systems. You will begin by studying database design, covering the entity-relationship model, and will then cover the relational data model, relational algebra and SQL. XML, as a newer data model, will also be covered, along with some XML query languages. You will take a deeper look at database query languages and their connection with logic and with complexity classes. In the second half, we will discuss some database implementation issues such as storage, indexes, query processing, and query optimisation.
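
As a small illustration of the relational model in practice (the schema, data and query are invented for this example and use Python's built-in sqlite3 module rather than anything prescribed by the course), the sketch below builds two relations and runs a join:

```python
import sqlite3

# In-memory database; the schema and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE enrolment (student_id INTEGER REFERENCES student(id),
                            option TEXT);
    INSERT INTO student VALUES (1, 'Ada'), (2, 'Alan');
    INSERT INTO enrolment VALUES (1, 'Databases'), (1, 'Machine Learning'),
                                 (2, 'Databases');
""")

# A join followed by selection -- the SQL counterpart of a relational-algebra
# expression combining the two relations on student_id.
rows = conn.execute("""
    SELECT s.name
    FROM student s JOIN enrolment e ON s.id = e.student_id
    WHERE e.option = 'Databases'
""").fetchall()
print(rows)   # [('Ada',), ('Alan',)] (order may vary)
```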

Deep Learning in Healthcare

Neuroscience research has inspired deep learning, and with the increasing digitisation of the medical domain, deep learning is set to advance healthcare. Deep learning has the potential to transform healthcare in areas ranging from medical imaging to electronic health records. This course aims to introduce students to recent advances in deep learning in healthcare, and the application of deep learning algorithms to medical data, including practical considerations for adapting models to the diversity of healthcare data. We will first introduce the biological basis for neural networks, the foundations of neural networks, and how the numerical operations are based on concepts from linear algebra, continuous mathematics, and probability, and how they are applied to train supervised models. We will then cover the computational components (e.g. convolutional neural networks) that form the backbone of powerful tools in deep learning, and how these can be deployed in the context of a variety of medical data (e.g. images). We will then cover networks that exploit the sequential (or temporal) structures in the data. Throughout the course, we will cover the practicalities of training neural networks, focusing particularly on applications in the healthcare domain, including discussion of optimisation, scalability, privacy, and fairness.
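
To illustrate the computational component at the heart of a convolutional neural network, here is a minimal sketch (invented for this example; real networks use optimised library implementations and learned kernels) of a valid 2-D convolution applied to a toy single-channel image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most deep
    learning libraries) of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)                     # stand-in for a small greyscale scan
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # crude vertical-edge detector
features = conv2d(image, edge_kernel)
print(features.shape)                            # (6, 6) feature map
```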

Geometric Modelling

This is an introductory course in modelling techniques for 3D objects. It covers a wide range of different ways of representing the geometry of real objects, depending on their functionality and application. The emphasis in this course will be on the theory and basic principles of constructing models; hence the practicals will not use CAD-specific software, but rather a programming environment suitable for reasoning about the mathematics of models.

Knowledge Representation & Reasoning

Knowledge Representation and Reasoning (KRR) is the field of Artificial Intelligence concerned with the encoding of knowledge in a machine-understandable way. This knowledge can then be processed automatically by suitable computer programs. KRR is at the core of so-called Semantic Technologies which are being widely deployed in applications.

This course is an up-to-date survey of selected topics in KRR. The course starts with a review of the basics of Propositional and First-Order logic and their use in the context of KRR. The second part of the course describes formal languages for KRR that are based on fragments of first-order logic and surveys the main reasoning techniques typically implemented for those fragments. Finally, the last part of the course surveys the important topic of non-monotonic reasoning. The course also discusses several applications where KRR-based technologies are currently being used.
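
As a tiny illustration of machine-processable knowledge (the facts, rules and `forward_chain` helper are invented for this example), the sketch below applies naive forward chaining to ground Horn rules until no new facts can be derived:

```python
def forward_chain(facts, rules):
    """Naive forward chaining: repeatedly apply Horn rules (body -> head)
    until no new facts can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and set(body) <= known:
                known.add(head)
                changed = True
    return known

# Illustrative knowledge base, not taken from the course.
facts = {"Parent(alice, bob)", "Parent(bob, carol)"}
rules = [
    (["Parent(alice, bob)", "Parent(bob, carol)"], "Grandparent(alice, carol)"),
    (["Grandparent(alice, carol)"], "Ancestor(alice, carol)"),
]
print(forward_chain(facts, rules))
```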

Lambda Calculus and Types

The lambda calculus is the prototypical functional programming language. Its basic theory is extremely rich, allowing us to explore many important phenomena in practical computer science in an elegant mathematical setting. Topics covered include the equational theory, term rewriting and reduction strategies, combinatory logic, Turing completeness and type systems.
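
As a flavour of the ideas, the sketch below (illustrative only) encodes Church numerals, using Python lambdas to stand in for lambda-calculus abstractions:

```python
# Church numerals: the number n is the function that applies f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Interpret a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # 5
```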

Logic and Proof

Logic plays an important role in many disciplines, including Philosophy and Mathematics, but it is particularly central to Computer Science. This course emphasises the computational aspects of logic, including applications to databases, constraint solving, programming and automated verification, among many others. We also highlight algorithmic problems in logic, such as SAT-solving, model checking and automated theorem proving.
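
As a minimal illustration of SAT-solving (a brute-force sketch invented for this example; real solvers use far more sophisticated techniques such as clause learning), the code below searches all truth assignments for a small CNF formula:

```python
from itertools import product

def brute_force_sat(clauses, variables):
    """Try every truth assignment; a clause is a set of literals such as
    ('p', True) meaning p, or ('q', False) meaning not q."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == sign for v, sign in clause)
               for clause in clauses):
            return assignment
    return None

# (p or q) and (not p or r) and (not q or not r)
clauses = [{("p", True), ("q", True)},
           {("p", False), ("r", True)},
           {("q", False), ("r", False)}]
print(brute_force_sat(clauses, ["p", "q", "r"]))
# {'p': True, 'q': False, 'r': True}
```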

The course relates to a number of third-year and fourth-year options. Propositional and predicate logic are central to Complexity Theory, Knowledge Representation and Reasoning, and Theory of Data and Knowledge Bases. They are also used extensively in Computer-Aided Formal Verification, Probabilistic Model Checking and Software Verification.

Machine Learning

Machine learning techniques enable us to automatically extract features from data so as to solve predictive tasks, such as speech recognition, object recognition, machine translation, question-answering, anomaly detection, medical diagnosis and prognosis, automatic algorithm configuration, personalisation, robot control, time series forecasting, and much more. Learning systems adapt so that they can solve new tasks, related to previously encountered tasks, more efficiently.

This course will introduce the field of machine learning, in particular focusing on the core concepts of supervised and unsupervised learning. In supervised learning we will discuss algorithms which are trained on input data labelled with a desired output, for instance an image of a face and the name of the person whose face it is, and learn a function mapping from the input to the output. Unsupervised learning aims to discover latent structure in an input signal where no output labels are available, an example of which is grouping web-pages based on the topics they discuss. Students will learn the algorithms which underpin many popular machine learning techniques, as well as developing an understanding of the theoretical relationships between these algorithms. The practicals will concern the application of machine learning to a range of real-world problems.
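
As a minimal illustration of supervised learning (an invented toy task, not course material), the sketch below fits a linear model to noisy input-output pairs by least squares and then uses the learned mapping to predict for an unseen input:

```python
import numpy as np

# Toy supervised-learning task: inputs x, noisy outputs y = 2x + 1 + noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 5, size=50)
y = 2 * x + 1 + rng.normal(scale=0.3, size=50)

# Fit the model y ~ w*x + b by least squares (the training step).
X = np.column_stack([x, np.ones_like(x)])       # design matrix with bias column
w, b = np.linalg.lstsq(X, y, rcond=None)[0]

# Prediction on an unseen input (the learned mapping from input to output).
print(f"learned w={w:.2f}, b={b:.2f}")          # close to w=2, b=1
print("prediction for x=10:", w * 10 + b)
```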

Physics Informed Neural Networks

Mathematical models have been used to investigate physical phenomena for centuries. Modelling is embedded in numerous scientific and non-scientific areas, including chemistry, physics, economics and the social sciences, to name but a few. As scientific understanding has expanded, the systems studied have become more and more complex, moving from simple relationships between inputs and observed outputs, to idealised differential equations, and on to huge systems of highly non-linear partial differential equations. The major aim of this course is to present the concept of physics-informed neural network approaches to approximating solutions of systems of partial differential equations.
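
To give a flavour of the underlying idea, the sketch below is a deliberate simplification: a polynomial ansatz stands in for the neural network, and the equation residual is minimised at collocation points by linear least squares rather than by training with automatic differentiation. It approximates the solution of the toy problem u'(x) = -u(x) with u(0) = 1:

```python
import numpy as np

# Ansatz u(x) = 1 + c0*x + c1*x^2 + c2*x^3 satisfies u(0) = 1 by construction.
# The "physics" enters through the residual r(x) = u'(x) + u(x), which is driven
# towards zero at collocation points -- the same idea as the PINN loss.
xs = np.linspace(0.0, 1.0, 20)                      # collocation points
A = np.column_stack([1 + xs,                        # coefficient of c0 in r(x)
                     2 * xs + xs**2,                # coefficient of c1
                     3 * xs**2 + xs**3])            # coefficient of c2
b = -np.ones_like(xs)                               # constant part of r(x) is 1
c = np.linalg.lstsq(A, b, rcond=None)[0]

u = lambda x: 1 + c[0] * x + c[1] * x**2 + c[2] * x**3
print(u(1.0), np.exp(-1.0))    # the two values should be close (about 0.37)
```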

Principles of Programming Languages

This course uses interpreters written in Haskell as a vehicle for exploring various kinds of programming languages.

Probability

The first half of the course takes further the probability theory that was developed in the first year. The aim is to build up a range of techniques that will be useful in dealing with mathematical models involving uncertainty. The second half of the course is concerned with Markov chains in discrete time and Poisson processes in one dimension, both developing the relevant theory and giving examples of applications.
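
As a small illustration of a Markov chain in discrete time (the transition matrix is invented for this example), the sketch below simulates a two-state chain and estimates its long-run state proportions:

```python
import numpy as np

# Two-state Markov chain; rows of P are transition probabilities from each state.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

rng = np.random.default_rng(1)
state, visits = 0, np.zeros(2)
for _ in range(100_000):
    visits[state] += 1
    state = rng.choice(2, p=P[state])

print(visits / visits.sum())   # long-run proportions, close to the stationary
                               # distribution (0.8, 0.2) for this chain
```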

Quantum Information

The course is designed for computer science undergraduates and focuses on the general theory of quantum information, independently of its physical realisations.

Requirements

Many software and hardware development projects go through a phase called 'Requirements Capture and Analysis' which tries to determine the properties a system should have in order to succeed in the environment in which it will be used. This can be a very difficult task, and typical requirements documents contain errors, some of which are very difficult to detect, as well as very expensive to correct later on. Experience shows that many errors arise from social, political and cultural factors and recent research has focused on the problem of reconciling such factors with traditional concerns about the more technical aspects of system development.

This course takes a distinctive stance on requirements in that it acknowledges the involvement of both social and technical concerns. The course surveys a wide range of different approaches to the problem of determining requirements, and aims to provide students with a set of techniques and skills that may be tailored to address a wide range of requirements problems.

Scientific Computing

The overarching aim of this course is to present the theory for a range of techniques used in scientific computing in a manner that allows problems to be solved efficiently and robustly through: (i) the choice of a suitable numerical algorithm for a specific problem; and (ii) understanding of how to utilise an implementation of the numerical algorithm efficiently.

The first component of the course will present the theory that underpins two commonly used methods for the iterative solution of linear systems, including preconditioning techniques that may be used to accelerate these iterative methods. Aside from black-box preconditioners, the potential to exploit the structure of the matrix will be demonstrated and related to the error analysis for iterative methods. This material will then be shown to have application when solving constrained minimisation problems using Newton's method. The second component of the course will focus on practical methods for the numerical solution of initial-value ordinary differential equations to within a specified tolerance. We will then exploit probabilistic methods to explain how a stochastic simulation of chemical reactions between a large number of molecules may be reduced to a system of initial-value ordinary differential equations. We conclude with a brief introduction to Bayesian inference, explaining how this may be used as an alternative to classical optimisation techniques when attempting to estimate parameters from experimental data that is subject to error.
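
To illustrate the flavour of iterative methods for linear systems (the sketch uses Jacobi iteration, a simple stationary method chosen for brevity, rather than the more sophisticated methods the course develops), the code below solves a small strictly diagonally dominant system:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k).
    Converges for strictly diagonally dominant A."""
    D = np.diag(A)                    # diagonal entries of A
    R = A - np.diagflat(D)            # off-diagonal part
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b))                   # agrees with np.linalg.solve(A, b)
```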