Evaluating Language Models for Mathematics through Interactions

Katherine M. Collins, Albert Jiang, Simon Frieder, Lionel Wong, Miri Zilka, Umang Bhatt, Thomas Lukasiewicz, Yuhuai Wu, Joshua B. Tenenbaum, William Hart, Timothy Gowers, Wenda Li, Adrian Weller and Mateja Jamnik

Abstract

There is much excitement about the opportunity to harness the power of large language models (LLMs) when building problem-solving assistants. However, the standard methodology of evaluating LLMs relies on static pairs of inputs and outputs; this is insufficient for making an informed decision about which LLMs are best to use in an interactive setting, and how that choice varies across settings. Static assessment therefore limits how we understand language model capabilities. We introduce CheckMate, an adaptable prototype platform for humans to interact with and evaluate LLMs. We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics, with a mixed cohort of participants ranging from undergraduate students to professors of mathematics. We release the resulting interaction and rating dataset, MathConverse. By analyzing MathConverse, we derive a taxonomy of human query behaviors and uncover that, despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness in LLM generations, among other findings. Further, we garner a more granular understanding of GPT-4's mathematical problem-solving through a series of case studies contributed by experienced mathematicians. We conclude with actionable takeaways for ML practitioners and mathematicians: models that communicate uncertainty, respond well to user corrections, and can provide a concise rationale for their recommendations may constitute better assistants. Humans should inspect LLM output carefully given these models' current shortcomings and potential for surprising fallibility.

Journal: Proceedings of the National Academy of Sciences of the United States of America (PNAS)
Volume: 121
Number: 24
Month: June
Year: 2024