
Optimisation for Game Theory and Machine Learning

1st February 2024 to 31st January 2027

Optimisation algorithms in practice often use local search. For example, when the objective function is continuous and smooth, gradient descent is the usual choice (as in the training of neural networks). In game-theoretic settings, local search arises naturally with multiple agents, each attempting to improve its payoff by best-responding to its peers’ behaviour. There is no general guarantee that this process converges; whether it does depends on the structure of the game. Local search often works surprisingly well in practice (even when its worst-case behaviour is known to be poor), and it is of interest to understand why. This project aims to develop a theory of what is going on, which should in turn lead to improved algorithms. As part of this agenda, it will consider various specific optimisation problems in more detail.
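As a minimal illustration of how convergence depends on game structure, the sketch below runs best-response dynamics on two toy two-player games (the payoff matrices and function names are our own illustrative choices, not part of the project): a coordination game, where the dynamics settle at a pure Nash equilibrium, and matching pennies, where best responses cycle forever.

```python
# Best-response dynamics in two-player finite games (illustrative sketch).
# Convention: payoffs[i][j] is the player's payoff for playing i against j.

def best_response(payoffs, opponent_action):
    """Index of the action maximising this player's payoff."""
    column = [row[opponent_action] for row in payoffs]
    return column.index(max(column))

def best_response_dynamics(payoffs_a, payoffs_b, a=0, b=0, rounds=20):
    """Players alternately best-respond; returns the visited action profiles."""
    history = [(a, b)]
    for _ in range(rounds):
        a = best_response(payoffs_a, b)   # player A responds to B's action
        b = best_response(payoffs_b, a)   # player B responds to A's action
        history.append((a, b))
        if len(history) > 2 and history[-1] == history[-2]:
            break                         # profile repeated: pure Nash equilibrium
    return history

# Coordination game: local search converges to an equilibrium.
coord = [[2, 0], [0, 1]]
print(best_response_dynamics(coord, coord))            # settles at (0, 0)

# Matching pennies: A wants to match, B to mismatch; best responses cycle.
pennies_a = [[1, -1], [-1, 1]]
pennies_b = [[-1, 1], [1, -1]]
print(best_response_dynamics(pennies_a, pennies_b, rounds=6))
```

The coordination game has pure equilibria, so the local-search process terminates; matching pennies has none, so the same process cycles without converging, which is exactly the structural distinction the paragraph above points to.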

Both machine learning and game theory have given rise to diverse problems of local optimisation, and it is of interest to classify them according to their computational hardness. The project aims to study a general issue: the “hard in theory, easy in practice” phenomenon exhibited by these problems (an aspect of the “beyond worst-case analysis” agenda). The project will include the design of novel algorithms with performance guarantees. In settings of continuous optimisation, where gradient descent is applicable, there are new and interesting variants of the technique, for example ‘optimistic’ gradient descent, and the question of how to adjust the step size, or learning rate, as the algorithm runs.
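To give a flavour of the ‘optimistic’ variant mentioned above, the sketch below contrasts plain simultaneous gradient descent-ascent with optimistic gradient descent-ascent on the standard bilinear toy game min_x max_y xy (the step size and starting point are arbitrary choices for illustration). Plain gradient dynamics spiral away from the equilibrium at the origin, while the optimistic update, which extrapolates using the previous gradient, contracts towards it.

```python
# Plain vs optimistic gradient descent-ascent on min_x max_y x*y.
# Gradients of x*y: d/dx = y, d/dy = x.

def gda(steps, eta=0.1, x=1.0, y=1.0):
    """Plain simultaneous gradient descent-ascent; diverges on this game."""
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def ogda(steps, eta=0.1, x=1.0, y=1.0):
    """Optimistic variant: step uses 2*(current gradient) - (previous gradient)."""
    gx_prev, gy_prev = y, x                  # gradients at the starting point
    for _ in range(steps):
        gx, gy = y, x                        # current gradients, taken simultaneously
        x = x - eta * (2 * gx - gx_prev)
        y = y + eta * (2 * gy - gy_prev)
        gx_prev, gy_prev = gx, gy
    return x, y

print(gda(200))    # distance from the equilibrium (0, 0) has grown
print(ogda(200))   # distance from the equilibrium (0, 0) has shrunk
```

The plain update multiplies the distance from the origin by a factor above 1 at every step, whereas the optimistic correction term cancels the rotation-induced growth, which is why such variants are of interest for game-theoretic optimisation.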

