Guaranteed Non-convex Machine Learning Algorithms through Spectral Methods
- 14:00, 2nd December 2016 (Week 8, Michaelmas Term 2016), Tony Hoare Room, Robert Hooke Building
Most learning problems can be cast as non-convex optimization tasks. Developing fast approaches with guarantees for solving non-convex problems is a grand challenge. I will show how spectral optimization can reach the globally optimal solution for many learning problems despite their non-convexity. These include unsupervised learning of latent variable models, training neural networks, and reinforcement learning of partially observable Markov decision processes. The approach involves spectral decomposition of moment matrices and tensors. Tensors are rich structures that can encode higher-order relationships in data. In practice, tensor methods yield enormous gains in both running time and learning accuracy over traditional methods such as variational inference. Overall, these positive results demonstrate that many challenging learning tasks can be solved with guarantees using efficient non-convex approaches. I will end the talk by mentioning my ongoing work at Amazon on large-scale learning.
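For a concrete sense of what "spectral decomposition of tensors" can look like, below is a minimal sketch (not the speaker's implementation) of the tensor power method with deflation: given a symmetric, orthogonally decomposable 3-tensor T = Σᵢ λᵢ vᵢ⊗vᵢ⊗vᵢ, power iteration on the contraction T(I, u, u) recovers the components one at a time. All function and variable names are illustrative assumptions.

```python
import numpy as np

def tensor_power_method(T, n_components, n_iter=100, seed=0):
    """Recover the components of a symmetric, orthogonally
    decomposable 3-tensor via power iteration with deflation.
    A minimal illustrative sketch, assuming exact (noise-free)
    orthogonal decomposability."""
    rng = np.random.default_rng(seed)
    d = T.shape[0]
    T = T.copy()
    eigvals, eigvecs = [], []
    for _ in range(n_components):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        for _ in range(n_iter):
            # Contraction T(I, u, u): the tensor analogue of a
            # matrix-vector product in the matrix power method.
            u_new = np.einsum('jkl,k,l->j', T, u, u)
            u = u_new / np.linalg.norm(u_new)
        # Recovered eigenvalue lambda = T(u, u, u).
        lam = np.einsum('jkl,j,k,l->', T, u, u, u)
        eigvals.append(lam)
        eigvecs.append(u)
        # Deflate: subtract the recovered rank-1 term and repeat.
        T -= lam * np.einsum('j,k,l->jkl', u, u, u)
    return np.array(eigvals), np.array(eigvecs)

# Usage: build a random orthogonal decomposition and recover it.
d, k = 8, 3
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((d, k)))
lams = np.array([3.0, 2.0, 1.0])
T = sum(l * np.einsum('j,k,l->jkl', v, v, v) for l, v in zip(lams, Q.T))
est_lams, est_vecs = tensor_power_method(T, k)
```

In the learning settings mentioned above, T would come from empirical moments of the data rather than being constructed directly, and the guarantee is that this procedure reaches the global decomposition despite the non-convexity of the underlying objective.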