📄 Abstract - Achieving Olympiad-Level Geometry Large Language Model Agent via Complexity-Boosting Reinforcement Learning
Large language model (LLM) agents exhibit strong mathematical problem-solving abilities and can even solve International Mathematical Olympiad (IMO) level problems with the assistance of formal proof systems. However, due to weak heuristics for auxiliary constructions, AI for geometry problem solving remains dominated by expert models such as AlphaGeometry 2, which rely heavily on large-scale data synthesis and search for both training and evaluation. In this work, we make the first attempt to build a medalist-level LLM agent for geometry and present InternGeometry. InternGeometry overcomes the heuristic limitations in geometry by iteratively proposing propositions and auxiliary constructions, verifying them with a symbolic engine, and reflecting on the engine's feedback to guide subsequent proposals. A dynamic memory mechanism enables InternGeometry to conduct more than two hundred interactions with the symbolic engine per problem. To further accelerate learning, we introduce Complexity-Boosting Reinforcement Learning (CBRL), which gradually increases the complexity of synthesized problems across training stages. Built on InternThinker-32B, InternGeometry solves 44 of 50 IMO geometry problems (2000-2024), exceeding the average gold medalist score (40.9), using only 13K training examples, just 0.004% of the data used by AlphaGeometry 2, demonstrating the potential of LLM agents on expert-level geometry tasks. InternGeometry can also propose novel auxiliary constructions for IMO problems that do not appear in human solutions. We will release the model, data, and symbolic engine to support future research.
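The propose-verify-reflect loop described above can be made concrete with a short sketch. Everything here is an illustrative assumption rather than the released InternGeometry interface: the `Memory` dataclass, the `propose`/`verify`/`entails_goal` method names, and the step budget are placeholders chosen to mirror the abstract's description.

```python
# Hypothetical sketch of the propose-verify-reflect agent loop from the
# abstract. `model` and `engine` are assumed objects, not the paper's API.
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Dynamic memory: verified facts and engine feedback accumulated so far."""
    verified: list = field(default_factory=list)
    feedback: list = field(default_factory=list)

def solve(problem, model, engine, max_steps=200):
    """Iteratively propose propositions / auxiliary constructions, verify
    them with the symbolic engine, and reflect on the engine's feedback."""
    memory = Memory()
    for _ in range(max_steps):  # the paper reports 200+ engine interactions
        # The LLM proposes a proposition or auxiliary construction,
        # conditioned on the problem and everything stored in memory.
        proposal = model.propose(problem, memory)
        ok, info = engine.verify(proposal, memory.verified)
        if ok:
            memory.verified.append(proposal)
            if engine.entails_goal(problem.goal, memory.verified):
                return memory.verified  # proof found
        else:
            # Failed proposals are not discarded: the engine's feedback
            # is kept in memory to guide the next proposal (reflection).
            memory.feedback.append(info)
    return None  # unsolved within the step budget
```

The dynamic memory is what makes hundreds of interactions per problem feasible: instead of replaying the full dialogue history, the agent carries forward only verified facts and distilled feedback.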
Achieving Olympiad-Level Geometry Large Language Model Agent via Complexity-Boosting Reinforcement Learning
1️⃣ One-Sentence Summary
This work presents an AI agent called InternGeometry that has a large language model interact repeatedly with a symbolic engine and learn from its feedback; with only a tiny amount of training data, it solves most International Mathematical Olympiad geometry problems, even exceeding the average performance of human gold medalists.
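For intuition, Complexity-Boosting Reinforcement Learning as the abstract describes it amounts to a staged curriculum over synthesized problems. The sketch below is a minimal outline under that reading; `synthesize_problems`, `rl_update`, and the complexity knob are all assumed names, not the paper's implementation.

```python
# A minimal, hypothetical sketch of Complexity-Boosting RL (CBRL):
# synthesized training problems get progressively more complex across
# stages. All function names below are illustrative placeholders.
def cbrl_train(policy, synthesize_problems, rl_update,
               stages=(1, 2, 3), problems_per_stage=4000):
    for stage in stages:
        # Stage-dependent complexity knob: e.g. more auxiliary points or
        # longer deduction chains as `stage` grows (assumed behavior).
        batch = synthesize_problems(complexity=stage, n=problems_per_stage)
        for problem in batch:
            rollout = policy.solve(problem)           # interacts with the engine
            reward = 1.0 if rollout.success else 0.0  # verifiable outcome reward
            rl_update(policy, rollout, reward)        # policy-gradient step
    return policy
```

Starting easy and boosting complexity stage by stage is consistent with the paper's small data budget (13K examples): early stages teach the propose-verify-reflect mechanics cheaply, and later stages spend the budget on harder synthesized problems.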