Teaching Models to Teach Themselves: Reasoning at the Edge of Learnability
1️⃣ One-Sentence Summary
This paper proposes a self-improvement framework called SOAR, in which a large language model acts as a "teacher" that generates problems it cannot yet solve, creating a learning curriculum for a "student" copy of itself. Without any additional human-curated data, this lets the model break through its learning plateau on hard mathematical problems.
Can a model learn to escape its own learning plateau? Reinforcement learning methods for finetuning large reasoning models stall on datasets with low initial success rates, and thus little training signal. We investigate a fundamental question: can a pretrained LLM leverage latent knowledge to generate an automated curriculum for problems it cannot solve? To explore this, we design SOAR: a self-improvement framework designed to surface these pedagogical signals through meta-RL. A teacher copy of the model proposes synthetic problems for a student copy, and is rewarded with the student's measured improvement on a small subset of hard problems. Critically, SOAR grounds the curriculum in measured student progress rather than intrinsic proxy rewards. Our study on the hardest subsets of mathematical benchmarks (0/128 success) reveals three core findings. First, we show that it is possible to realize bi-level meta-RL that unlocks learning under sparse, binary rewards by sharpening a latent capacity of pretrained models to generate useful stepping stones. Second, grounded rewards outperform the intrinsic reward schemes used in prior LLM self-play, reliably avoiding the instability and diversity-collapse modes they typically exhibit. Third, analyzing the generated questions reveals that structural quality and well-posedness are more critical for learning progress than solution correctness. Our results suggest that the ability to generate useful stepping stones does not require the preexisting ability to actually solve the hard problems, paving a principled path to escape reasoning plateaus without additional curated data.
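The grounded bi-level loop described above can be sketched as a toy simulation. Everything here is our own illustrative assumption, not the paper's implementation: the "student" is a scalar skill level, the "teacher" a Gaussian problem generator, and learning happens only on problems just beyond the student's current ability (the "edge of learnability"). The key point the sketch preserves is that the teacher's reward is the student's measured improvement on a held-out hard set, not an intrinsic proxy.

```python
import random

random.seed(0)

HARD_SET = [0.9] * 4  # hard problems: student's initial solve rate is 0

def solve_rate(skill, problems):
    """Binary, sparse reward: fraction of problems the student can solve."""
    return sum(skill >= d for d in problems) / len(problems)

def train_student(skill, curriculum):
    """Toy learning rule: the student only improves on problems slightly
    beyond its ability -- useful stepping stones can chain upward."""
    for d in sorted(curriculum):
        if skill < d <= skill + 0.1:
            skill = d
    return skill

def soar_loop(steps=40):
    """One hypothetical outer loop: teacher proposes, student trains,
    teacher is rewarded by grounded (measured) student progress."""
    skill, teacher_mean = 0.3, 0.3
    for _ in range(steps):
        curriculum = [random.gauss(teacher_mean, 0.15) for _ in range(8)]
        before = solve_rate(skill, HARD_SET)
        new_skill = train_student(skill, curriculum)
        after = solve_rate(new_skill, HARD_SET)
        grounded_reward = after - before  # progress on the hard set
        # Crude teacher update (our assumption): when the student improved,
        # shift proposals toward the student's new frontier.
        if new_skill > skill:
            teacher_mean = 0.5 * teacher_mean + 0.5 * (new_skill + 0.05)
        skill = new_skill
    return skill, solve_rate(skill, HARD_SET)

final_skill, hard_rate = soar_loop()
print(f"final skill={final_skill:.2f}, hard-set solve rate={hard_rate:.2f}")
```

Note that the teacher never needs to solve the hard problems itself; it only needs to propose problems near the student's frontier, mirroring the paper's claim that stepping-stone generation does not presuppose the ability to solve the targets.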
Source: arXiv:2601.18778