Quaternion Neural Networks
- 11:00, 9th July 2019, Lecture Theatre B
Real-world data used to train modern artificial neural networks reflect the complexity of the environment we live in. As a consequence, such data are neither flat, nor decorrelated, nor one-dimensional. Instead, scientists have to deal with composed and multidimensional entities characterized by multiple related components, such as the colour channels describing a single pixel of an image, or the 3D coordinates denoting the position of a robot. Surprisingly, recent advances in deep learning have mainly focused on developing novel architectures to extract ever more relevant and robust high-level representations from the input features, while those features remain poorly considered at the lower, basic level, where they are processed by one-dimensional real-valued neural models.

Neural networks based on complex and quaternion numbers have been used sparsely for many decades. Nonetheless, thanks to recent results demonstrating the benefits of these models over real-valued ones on many real-world tasks, quaternion-based neural networks have been increasingly employed, and novel quaternion-based architectures have been proposed. This talk will detail quaternion neural network architectures for artificial intelligence tasks such as image processing and speech recognition, first introducing the basics of quaternion numbers, and then describing recent advances on quaternion neural networks with quaternion convolutional (Interspeech 2018, ICASSP 2019) and recurrent neural networks (ICLR 2019). The presentation will also show their benefits, both in terms of performance on different tasks and in terms of the number of neural parameters required for learning. Finally, the talk will outline important future research directions for turning quaternion neural networks into a compelling alternative to real-valued models for real-world tasks.
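The "basics of quaternion numbers" mentioned above boil down to the Hamilton product, the non-commutative multiplication rule that quaternion layers use in place of ordinary matrix products. As a minimal illustration (not code from the talk or the cited papers), the sketch below implements the Hamilton product for quaternions stored as (w, x, y, z) arrays and verifies the defining identity ij = k, ji = -k:

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) arrays.

    For p = w1 + x1*i + y1*j + z1*k and q = w2 + x2*i + y2*j + z2*k,
    the product follows from i^2 = j^2 = k^2 = ijk = -1.
    """
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,  # i component
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,  # j component
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,  # k component
    ])

# The product is non-commutative: i * j = k, but j * i = -k.
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(hamilton_product(i, j))  # -> [0. 0. 0. 1.]   i.e.  k
print(hamilton_product(j, i))  # -> [0. 0. 0. -1.]  i.e. -k
```

In a quaternion layer, each weight is a single quaternion applied to a quaternion input via this product, so the four components share one set of parameters; this weight sharing is the source of the parameter savings over real-valued models that the abstract refers to.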
Bio
Titouan Parcollet is a PhD student at the Laboratoire Informatique d'Avignon of the University of Avignon (France), under the co-supervision of Georges Linarès and Mohamed Morchid. He is also a research engineer at Orkis, Aix-en-Provence (France), and has recently been a visiting scientist at MILA (working with Yoshua Bengio, Mirco Ravanelli and Chiheb Trabelsi). His expertise is in quaternion-valued neural networks, while his research interests broadly involve machine learning for better representation learning; speech, image and language processing; and applications for social good. His work has been presented at major machine learning conferences (ICLR, NeurIPS) and speech and language processing venues (INTERSPEECH, ICASSP, SLT, ASRU, CORIA). He also co-leads the recent Pytorch-Kaldi toolkit, developed in partnership with MILA as a unified speech processing toolkit.