e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom
Abstract
In order for machine learning to garner widespread public adoption, models must be able to provide interpretable and robust explanations for their decisions, as well as learn from natural language explanations. In this work, we extend the Stanford Natural Language Inference (SNLI) dataset with an additional layer of human-annotated free-form explanations of the entailment relations. We further implement models that incorporate these explanations into their training process and output them at test time. We show that our corpus of explanations can be used for various goals, such as obtaining full-sentence justifications of a model's decisions and learning universal sentence representations that yield consistent improvements on a range of tasks over representations learned without explanations. Our dataset opens up a range of research directions for using natural language explanations, both for improving models and for establishing trust in them.