Key Points
1. Mathematical reasoning is a core measure of intelligence, and enabling AI to tackle math problems autonomously requires a comprehensive understanding of diverse mathematical facets.
2. Large Language Models (LLMs) have gained prominence for automatically solving mathematical problems, showing promise in handling the intricacies of mathematical problem-solving.
3. Current LLM-oriented research in mathematics is hard to survey: the field spans diverse problem types and employs varied evaluation metrics, datasets, and experimental settings, which makes techniques difficult to compare directly.
4. The survey addresses four pivotal dimensions: mathematical problem types and their associated datasets, LLM-oriented techniques for solving them, factors that affect LLM performance on math, and persisting challenges in the domain.
5. Although it overlaps with earlier literature, this survey distinguishes itself by concentrating on LLMs, analyzing their advancements in greater depth, discussing the challenges along this trajectory, and extending its scope to the perspective of mathematics pedagogy.
6. The survey provides an overview of prominent mathematical problem types and associated datasets, such as Arithmetic, Math Word Problems, Geometry, Automated Theorem Proving, and Math in Vision Context.
7. Methods for enhancing LLM performance are organized into three progressive levels: prompting frozen LLMs, strategies that augment frozen LLMs, and fine-tuning LLMs; a minimal prompting sketch follows this list.
8. The survey also examines the robustness of LLMs in math solving, the critical role tokenization plays in arithmetic performance, and the brittleness of LLMs' mathematical reasoning across different dimensions; a toy robustness probe likewise follows this list.
9. The survey also highlights the advantages and disadvantages of deploying LLMs in math education, emphasizing the need for a human-centric approach and continued learning to enhance LLM capabilities.
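To make the first of the three levels concrete, below is a minimal sketch of few-shot chain-of-thought prompting against a frozen chat model. It is not code from the survey: the openai client usage is standard, but the model name, system prompt, and exemplar are illustrative assumptions.

```python
# Minimal sketch of few-shot chain-of-thought (CoT) prompting for a frozen
# chat model, using the openai Python client (>=1.0). Model name and
# exemplar are illustrative, not taken from the surveyed paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_EXEMPLAR = (
    "Q: A farm has 3 pens with 12 chickens each. 7 chickens are sold. "
    "How many remain?\n"
    "A: Let's think step by step. 3 pens x 12 chickens = 36. "
    "36 - 7 = 29. The answer is 29."
)

def solve(question: str) -> str:
    # The exemplar shows the step-by-step format the frozen model should
    # imitate; no weights are updated (level 1: prompting only).
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works
        messages=[
            {"role": "system",
             "content": "You solve math word problems step by step."},
            {"role": "user",
             "content": f"{COT_EXEMPLAR}\n\nQ: {question}\nA: Let's think step by step."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(solve("Sara has 5 boxes of 8 pencils and gives away 11. How many are left?"))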
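```

One common way to expose the brittleness noted in point 8 is to perturb surface details of a problem and check whether answers stay correct. The following toy probe is an assumption-laden sketch of that idea, not the survey's evaluation protocol: it re-samples the numbers in a word-problem template and measures a solver's accuracy across perturbations.

```python
# Toy robustness probe (illustrative, not the survey's protocol): perturb
# the numbers in a word-problem template and score answer consistency.
import random

TEMPLATE = ("Sam has {a} bags with {b} marbles each and loses {c}. "
            "How many marbles are left?")

def ground_truth(a: int, b: int, c: int) -> int:
    return a * b - c

def probe(solver, trials: int = 20) -> float:
    """Return the fraction of perturbed instances the solver gets right."""
    correct = 0
    for _ in range(trials):
        a, b, c = random.randint(2, 9), random.randint(5, 20), random.randint(1, 10)
        if solver(TEMPLATE.format(a=a, b=b, c=c)) == ground_truth(a, b, c):
            correct += 1
    return correct / trials

if __name__ == "__main__":
    # Sanity check with an oracle solver; a real experiment would pass an
    # LLM wrapper that extracts an integer from the model's reply.
    import re
    def oracle(question: str) -> int:
        a, b, c = map(int, re.findall(r"\d+", question))
        return a * b - c
    print(probe(oracle))  # prints 1.0
```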
Summary
Survey of Large Language Models in Mathematical Reasoning
The paper surveys Large Language Models (LLMs) in the context of mathematical reasoning, covering mathematical problem types, their associated datasets, LLM techniques for problem-solving, and the open challenges of the domain. The authors note the surge in developing LLMs for the automated resolution of mathematical problems, driven by the goal of equipping machines with a comprehensive understanding of diverse mathematical facets, and they argue that the vast and varied landscape of problem types makes it difficult to discern true advancements and obstacles in this evolving field. The survey spans the multifaceted nature of mathematical challenges, including arithmetic, math word problems, geometry, automated theorem proving, and math in a vision context, and reviews the techniques LLMs use for mathematical reasoning, from chatbot evaluation to the effectiveness of different prompting methods.
The authors then examine factors that affect LLMs' performance in mathematical problem-solving, such as tokenization, the pre-training corpus, prompts, and model scale; a short tokenization sketch follows this paragraph. They also discuss perspectives on deploying LLMs in math education, weighing advantages against disadvantages and highlighting potential issues around privacy, data security, and meeting students' diverse learning needs. Persisting challenges include data-driven limitations, the brittleness of LLMs in mathematical reasoning, and the need for a human-centric approach in math education. Overall, the paper aims to give a comprehensive picture of the current state and challenges of LLM-driven mathematical reasoning, shedding light on potential advancements and practical applications across diverse mathematical contexts.
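To illustrate the tokenization factor, the short sketch below (an illustrative example using the tiktoken library, not code from the paper) shows how a common GPT-style tokenizer segments numbers into irregular multi-digit chunks, one plausible reason digit-level arithmetic is brittle.

```python
# Illustrative sketch (not from the paper): inspect how a GPT-style
# tokenizer segments numbers. Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several GPT models

for text in ["12345678", "7 + 35 = 42", "1003 * 2"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    # Numbers are typically split into 1-3 digit chunks rather than single
    # digits, so the model never sees a uniform digit-by-digit representation.
    print(f"{text!r} -> {pieces}")
```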
Reference: https://arxiv.org/abs/2402.00157