Training LLMs for Divide-and-Conquer Reasoning Elevates Test-Time Scalability
1️⃣ One-Sentence Summary
This paper proposes a new reinforcement learning training framework that teaches large language models to reason like the butcher in the idiom 庖丁解牛 (deftly carving an ox along its natural joints): first split a complex problem into subproblems and solve them one by one, then integrate the answers. On highly difficult tasks, this outperforms the conventional step-by-step approach in both accuracy and scalability.
Large language models (LLMs) have demonstrated strong reasoning capabilities through step-by-step chain-of-thought (CoT) reasoning. Nevertheless, at the limits of model capability, CoT often proves insufficient, and its strictly sequential nature constrains test-time scalability. A potential alternative is divide-and-conquer (DAC) reasoning, which decomposes a complex problem into subproblems to facilitate more effective exploration of the solution space. Although promising, our analysis reveals a fundamental misalignment between general-purpose post-training and DAC-style inference, which limits the model's capacity to fully leverage this potential. To bridge this gap and fully unlock LLMs' reasoning capabilities on the most challenging tasks, we propose an end-to-end reinforcement learning (RL) framework to enhance their DAC-style reasoning capacity. At each step, the policy decomposes a problem into a group of subproblems, solves them sequentially, and addresses the original one conditioned on the subproblem solutions, with both decomposition and solution integrated into RL training. Under comparable training, our DAC-style framework endows the model with a higher performance ceiling and stronger test-time scalability, surpassing CoT by 8.6% in Pass@1 and 6.3% in Pass@32 on competition-level benchmarks.
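To make the inference procedure in the abstract concrete, here is a minimal sketch of one DAC step: decompose, solve subproblems sequentially, then answer the original problem conditioned on the subproblem solutions. The `generate` callable stands in for any LLM completion API; the prompt templates and the line-per-subproblem parsing convention are illustrative assumptions, not the paper's actual implementation (which additionally trains both the decomposition and the solution with RL).

```python
from typing import Callable, List

def dac_reason(problem: str, generate: Callable[[str], str]) -> str:
    """One DAC step: decompose, solve subproblems in order, then solve
    the original problem conditioned on the subproblem solutions."""
    # 1) Divide: ask the model to split the problem into subproblems,
    #    one per line (a simple parsing convention assumed for this sketch).
    decomposition = generate(
        "Decompose the following problem into subproblems, one per line:\n"
        + problem
    )
    subproblems: List[str] = [
        s.strip() for s in decomposition.splitlines() if s.strip()
    ]

    # 2) Solve each subproblem sequentially, letting later subproblems
    #    condition on earlier solutions.
    solutions: List[str] = []
    for sub in subproblems:
        context = "\n".join(
            f"Subproblem: {q}\nSolution: {a}"
            for q, a in zip(subproblems, solutions)
        )
        solutions.append(generate(f"{context}\nSolve: {sub}"))

    # 3) Conquer: answer the original problem given all subproblem solutions.
    solved = "\n".join(
        f"Subproblem: {q}\nSolution: {a}"
        for q, a in zip(subproblems, solutions)
    )
    return generate(
        "Using the subproblem solutions below, solve the original problem.\n"
        f"{solved}\nOriginal problem: {problem}"
    )
```

Note that this sketch covers inference only; in the paper's framework, reward signal from the final answer propagates to both the decomposition and the solution steps during RL training.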
Source: arXiv: 2602.02477