
Students presenting latest research findings in graph neural networks at NeurIPS


A group of students from the Department of Computer Science have headed to New Orleans to present their cutting-edge research on graph neural networks at the world’s top-tier machine learning conference.

Sam Adam-Day (a former DPhil student), Radoslav Dimitrov (a former Part B student), Xingyue Huang (a former Part C student), Ralph Abboud (a former DPhil student) and their supervisor Ismail Ilkan Ceylan are in New Orleans this week for the Neural Information Processing Systems conference (NeurIPS).  

Ismail said the conference was a great opportunity for sharing the latest knowledge on graph machine learning.

"NeurIPS 2023 is a great opportunity to exchange research advances in machine learning. We are so pleased that students from the Department of Computer Science are able to share with the best and the brightest young minds their latest research findings across three papers."

Departmental Lecturer Ismail Ilkan Ceylan

The three papers being presented are:

Zero-one laws of graph neural networks: https://arxiv.org/abs/2301.13060    

The authors: This work will be presented by Sam Adam-Day, who was a Maths DPhil student at the time of the project and is currently a postdoctoral researcher at the Department of Computer Science. Theodor Mihai Iliant contributed to this project as a Part C student.

The research: Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analysing the capabilities and limitations of these models, particularly their representation and extrapolation capacity. This research offers a novel theoretical perspective by answering the question: how do GNNs behave as the number of graph nodes becomes very large? The authors show that when graphs of increasing size are drawn from the Erdős-Rényi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends either to zero or to one. This class includes the popular graph convolutional network architecture. The result establishes 'zero-one laws' for these GNNs and, analogously to other convergence laws, entails theoretical limitations on their capacity. The authors verify their results empirically, observing that the theoretical asymptotic limits are evident already on relatively small graphs.
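To give a flavour of the phenomenon, here is a minimal simulation sketch (our illustration, not the authors' experiment): an untrained mean-aggregation GNN with random node features is applied to Erdős-Rényi graphs of increasing size, and its graph-level output visibly concentrates as the number of nodes grows. All weights, dimensions and the readout are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def er_graph(n, p=0.5):
    """Adjacency matrix of an Erdős-Rényi G(n, p) random graph."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(float)

def gnn_output(adj, w1, w2):
    """Two rounds of mean aggregation, then a sigmoid mean readout."""
    n = adj.shape[0]
    deg = adj.sum(axis=1, keepdims=True) + 1e-9   # avoid division by zero
    h = rng.normal(size=(n, w1.shape[0]))          # random node features (illustrative)
    h = np.tanh((adj @ h) / deg @ w1)              # layer 1: mean over neighbours
    h = np.tanh((adj @ h) / deg @ w2)              # layer 2: mean over neighbours
    return 1.0 / (1.0 + np.exp(-h.mean()))         # graph-level output in (0, 1)

w1, w2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
for n in (10, 100, 1000):
    outs = [gnn_output(er_graph(n), w1, w2) for _ in range(20)]
    print(f"n={n:4d}  mean={np.mean(outs):.4f}  std={np.std(outs):.5f}")
# As n grows the output concentrates around a fixed value, so a classifier
# thresholding it gives the same answer asymptotically almost surely,
# in the spirit of a zero-one law.
```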
 
PlanE: Representation learning with planar graphs: https://arxiv.org/abs/2307.01180   

The authors: This work is co-first-authored by Radoslav Dimitrov, a former Part B student, and Zeyang Zhao, a former Part C student. Another co-author is Ralph Abboud, a former DPhil student from the Department who was supervised by Ismail Ilkan Ceylan.

The research: Graph neural networks are prominent models for representation learning over graphs. The idea is to iteratively compute representations of the nodes of an input graph through a series of transformations, in such a way that the learned graph function is isomorphism invariant, which makes the learned representations graph invariants. It is well known that the graph invariants learned by this class of models are incomplete: there are pairs of non-isomorphic graphs that cannot be distinguished by standard graph neural networks, as illustrated by the sketch below. This is unsurprising given the computational difficulty of graph isomorphism testing on general graphs, but the situation is different for special graph classes for which efficient graph isomorphism testing algorithms are known, such as planar graphs. The goal of this work is to design architectures for efficiently learning complete invariants of planar graphs. Inspired by the classical planar graph isomorphism algorithm of Hopcroft and Tarjan, the researchers propose PlanE as a framework for planar representation learning. PlanE includes architectures which can learn complete invariants over planar graphs while remaining practically scalable. They empirically validate the strong performance of the resulting model architectures on well-known planar graph benchmarks, achieving multiple state-of-the-art results.
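As a concrete illustration of that incompleteness (our example, not taken from the paper), the sketch below runs colour refinement (1-WL), the procedure that upper-bounds the distinguishing power of standard message-passing GNNs. It cannot separate a 6-cycle from two disjoint triangles, even though both graphs are planar and easy to distinguish by a dedicated algorithm.

```python
from collections import Counter

def wl_colours(edges, n, rounds=3):
    """Iteratively refine node colours by hashing neighbour colour multisets."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    colour = {v: 0 for v in range(n)}          # uniform initial colouring
    for _ in range(rounds):
        colour = {v: hash((colour[v], tuple(sorted(colour[u] for u in adj[v]))))
                  for v in range(n)}
    return Counter(colour.values())            # multiset of final colours

cycle6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]       # one 6-cycle
triangles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]    # two triangles
# Both graphs are 2-regular, so every node keeps the same colour:
print(wl_colours(cycle6, 6) == wl_colours(triangles, 6))        # True
```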

A Theory of Link Prediction via Relational Weisfeiler-Leman on knowledge graphs: https://arxiv.org/pdf/2302.02209.pdf    

The authors: This work will be presented by Xingyue Huang, a former Part C student who is now a DPhil student at the Department, co-supervised by Ismail Ilkan Ceylan and Michael Bronstein. The work is a collaboration with external researchers, namely Prof. Pablo Barceló and Dr. Miguel Romero Orth from Universidad Católica de Chile.

The research: Graph neural networks are prominent models for representation learning over graph-structured data. While the capabilities and limitations of these models are well understood for simple graphs, the picture remains incomplete for knowledge graphs. The authors' goal is to provide a systematic understanding of the landscape of graph neural networks for knowledge graphs, pertaining to the prominent task of link prediction. Their analysis yields a unifying perspective on seemingly unrelated models and unlocks a series of other models. The expressive power of the various models is characterised via a corresponding relational Weisfeiler-Leman algorithm, and this analysis is extended to a precise logical characterisation of the class of functions captured by a class of graph neural networks. The theoretical findings explain the benefits of some widely employed practical design choices, which are validated empirically.
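For readers unfamiliar with the setting, here is a generic sketch of the kind of model the analysis concerns: a relation-aware message-passing layer in which each relation type has its own transformation (in the style of R-GCN-like architectures), followed by a simple bilinear link-prediction score. This is not a specific model from the paper; the shapes, weights and toy knowledge graph are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_rel, d = 4, 2, 3
# Toy knowledge graph as (head, relation, tail) triples
triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (0, 1, 3)]

h = rng.normal(size=(n_nodes, d))        # initial node embeddings
w_rel = rng.normal(size=(n_rel, d, d))   # one weight matrix per relation type
w_self = rng.normal(size=(d, d))         # self-loop transformation

# One layer of relation-aware message passing
msg = np.zeros_like(h)
for head, rel, tail in triples:
    msg[tail] += h[head] @ w_rel[rel]    # relation-specific message to the tail
h_next = np.tanh(h @ w_self + msg)       # update combining self and messages

# Score a candidate link (subject 0, relation 1, object 3), cf. bilinear decoders
score = h_next[0] @ w_rel[1] @ h_next[3]
print(score)
```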