
Training neural models using logic: results, challenges, and applications

Efthymia Tsamoura (Samsung AI, Cambridge, UK)

Neurosymbolic learning (NSL) promises to transform AI by combining the strong induction capabilities of neural models with the strong deduction capabilities of symbolic knowledge representation and reasoning techniques. This talk centers on an NSL problem that has received significant attention lately: training neural classifiers using supervision produced by logical theories. Empirical research has shown the advantages of this learning setting over end-to-end deep neural architectures in multiple respects, including accuracy and model complexity. Despite the extensive empirical research, little theoretical analysis has been devoted to understanding whether, and under which conditions, the underlying neural models can be learned.

This talk addresses this gap by proposing necessary and sufficient conditions that ensure the underlying models can be learned with rigorous guarantees. I will also discuss the relationship between this problem and other well-known problems in the machine learning literature. Furthermore, I will present new challenges inherent to this NSL setting and propose solutions to overcome them, leading to models with substantially higher accuracy. I will conclude the talk with recent applied results and open challenges.
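To make the learning setting concrete, below is a minimal, purely illustrative PyTorch sketch (not the approach presented in the talk). Two instances receive no direct labels; the only supervision is the truth value of a logical formula (here, XOR) over their latent labels, and the classifier is trained by maximising the probability of reproducing that observed truth value. The toy data, network architecture, and choice of XOR formula are all assumptions made for this example.

import torch
import torch.nn as nn

# Toy data (assumed for this example): each training example is a pair of
# 2-D points; the latent label of a point is whether its first coordinate
# is positive. The only supervision is the truth value of (ya XOR yb).
torch.manual_seed(0)
n = 2000
xa, xb = torch.randn(n, 2), torch.randn(n, 2)
ya, yb = (xa[:, 0] > 0).float(), (xb[:, 0] > 0).float()
z = (ya != yb).float()  # observed truth value of the logical formula

# One shared classifier predicts p(y = 1 | x) for each point.
clf = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt = torch.optim.Adam(clf.parameters(), lr=1e-2)

for epoch in range(300):
    pa, pb = clf(xa).squeeze(1), clf(xb).squeeze(1)
    # Probability that (ya XOR yb) is true, marginalising over both latent labels.
    p_xor = pa * (1 - pb) + (1 - pa) * pb
    # Likelihood of the observed truth value; the loss is its negative log.
    p_obs = torch.where(z.bool(), p_xor, 1 - p_xor)
    loss = -torch.log(p_obs.clamp_min(1e-9)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The classifier never saw a direct label; XOR supervision determines the
# labels only up to a global flip, so report the better of the two symmetric solutions.
with torch.no_grad():
    acc = ((clf(xa).squeeze(1) > 0.5).float() == ya).float().mean().item()
    print(f"per-instance accuracy: {max(acc, 1 - acc):.2f}")

Note that XOR supervision leaves a global label flip unresolved in this toy setup, a simple instance of the broader question raised in the abstract of whether, and under which conditions, the underlying neural models can be learned.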

It will be possible to join the seminar remotely via MS Teams using this link: https://teams.microsoft.com/l/meetup-join/19%3ameeting_YmU3M2E1NzItNzVhZS00ZjdhLTk4Y2MtYzlhMWQ0Y2YxMTVj%40thread.v2/0?context=%7b%22Tid%22%3a%22cc95de1b-97f5-4f93-b4ba-fe68b852cf91%22%2c%22Oid%22%3a%22372da332-6dbb-4cd5-8482-f50f118abb41%22%7d

The slides from the seminar are available in the MS Teams chat, as well as here.

Speaker bio

Efi Tsamoura is a Senior Researcher at Samsung AI, Cambridge, UK. In 2016, she was awarded a prestigious early-career fellowship from the Alan Turing Institute, UK; before that, she was a Postdoctoral Researcher in the Department of Computer Science at the University of Oxford. Her main research interests lie in the areas of logic, knowledge representation and reasoning, and neurosymbolic learning; her recent work includes scaling symbolic reasoning to billions of triples and addressing open problems in neurosymbolic learning.
