Herding Tensor Compilers
- 16:00, 15th October 2021 (week 1, Michaelmas Term 2021), Zoom
The orchestration of high-performance numerical computations on distributed and heterogeneous systems is not getting any simpler. In the last five years, driven by the needs of machine learning, systems and compilers have made tremendous progress towards hiding this complexity while delivering excellent performance. These undeniable successes of computing systems and programming language research have also come with undesirable and somewhat paradoxical side effects: abstractions and engineering frameworks have diversified out of control, while machine learning models have become stuck in the rut defined by a small set of highly optimized operators. We will recall the algebraic principles supporting the compilation of tensor algebra, and illustrate these principles through three optimization strategies involving different degrees of human expert intervention. While the presentation focuses on optimization and algorithms, we will also discuss MLIR, a large-scale compiler construction effort to rationalize the landscape of machine learning systems.
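For readers outside the field, the sketch below (in JAX, illustrative only, not material from the talk) shows the kind of tensor-algebra program such compilers optimize: a composition of a contraction and pointwise operators that a compiler like XLA is free to fuse into a few optimized kernels rather than execute operator by operator.

```python
# A minimal sketch of what a tensor compiler sees: a composition of
# tensor-algebra operators handed to the compiler as a whole.
import jax
import jax.numpy as jnp

def mlp_layer(x, w, b):
    # Tensor contraction (matrix multiply) followed by a bias add and a
    # pointwise nonlinearity -- bread-and-butter ML operators.
    return jnp.maximum(jnp.einsum("ij,jk->ik", x, w) + b, 0.0)

# jax.jit hands the whole function to the XLA compiler, which may fuse
# the contraction, bias add, and ReLU instead of running them separately.
layer = jax.jit(mlp_layer)

x = jnp.ones((8, 16))
w = jnp.ones((16, 4))
b = jnp.zeros((4,))
print(layer(x, w, b).shape)  # (8, 4)

# The traced intermediate representation exposes the algebraic structure
# that the compiler's rewrites operate on.
print(jax.make_jaxpr(mlp_layer)(x, w, b))
```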