arXiv submission date: 2025-12-15
📄 Abstract - TraPO: A Semi-Supervised Reinforcement Learning Framework for Boosting LLM Reasoning

Reinforcement learning with verifiable rewards (RLVR) has proven effective in training large reasoning models (LRMs) by leveraging answer-verifiable signals to guide policy optimization, which, however, suffers from high annotation costs. To alleviate this problem, recent work has explored unsupervised RLVR methods that derive rewards solely from the model's internal consistency, such as through entropy and majority voting. While seemingly promising, these methods often suffer from model collapse in the later stages of training, which may arise from the reinforcement of incorrect reasoning patterns in the absence of external supervision. In this work, we investigate a novel semi-supervised RLVR paradigm that utilizes a small labeled set to guide RLVR training on unlabeled samples. Our key insight is that supervised rewards are essential for stabilizing consistency-based training on unlabeled samples, ensuring that only reasoning patterns verified on labeled instances are incorporated into RL training. Technically, we propose an effective policy optimization algorithm, TraPO, that identifies reliable unlabeled samples by matching their learning trajectory similarity to labeled ones. Building on this, TraPO achieves remarkable data efficiency and strong generalization on six widely used mathematical reasoning benchmarks (AIME24/25, AMC, MATH-500, Minerva, and Olympiad) and three out-of-distribution tasks (ARC-c, GPQA-diamond, and MMLU-pro). With only 1K labeled and 3K unlabeled samples, TraPO reaches 42.6% average accuracy, surpassing the best unsupervised method trained on 45K unlabeled samples (38.3%). Notably, when using 4K labeled and 12K unlabeled samples, TraPO even outperforms the fully supervised model trained on the full 45K labeled samples on all benchmarks, while using only 10% of the labeled data. The code is available via this https URL.
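The abstract's core idea is selecting unlabeled samples whose learning trajectories resemble those of labeled samples. The paper's exact trajectory representation and similarity metric are not given here, so the following is only a minimal illustrative sketch under assumed simplifications: each sample's trajectory is a fixed-length vector of per-step training signals (e.g., reward or consistency scores), similarity is cosine similarity, and an unlabeled sample is kept if it is close enough to at least one labeled trajectory. The function names `cosine_sim` and `select_reliable_unlabeled` and the threshold value are hypothetical, not from the paper.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two equal-length trajectory vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-8)  # small epsilon guards against zero norms

def select_reliable_unlabeled(labeled_trajs, unlabeled_trajs, threshold=0.9):
    """Illustrative sketch (not the paper's algorithm): keep an unlabeled
    sample if its trajectory is cosine-similar to at least one labeled
    sample's trajectory. Returns indices into unlabeled_trajs."""
    selected = []
    for i, u in enumerate(unlabeled_trajs):
        best = max(cosine_sim(u, l) for l in labeled_trajs)
        if best >= threshold:
            selected.append(i)
    return selected

# Toy usage: the first unlabeled trajectory tracks the labeled one closely,
# the second does not, so only index 0 survives the filter.
labeled = [[0.1, 0.4, 0.8]]
unlabeled = [[0.1, 0.45, 0.75], [0.9, 0.1, 0.2]]
print(select_reliable_unlabeled(labeled, unlabeled))  # [0]
```

In a full semi-supervised RLVR loop, only the selected unlabeled samples would be passed to the consistency-based reward stage, which is how the labeled set anchors training and (per the abstract) prevents collapse.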

Top-level tags: llm, reinforcement learning, model training
Detailed tags: semi-supervised learning, reasoning, policy optimization, mathematical reasoning, data efficiency

TraPO: A Semi-Supervised Reinforcement Learning Framework for Boosting LLM Reasoning


1️⃣ One-Sentence Summary

This paper proposes TraPO, a semi-supervised reinforcement learning method that combines a small labeled set with a large pool of unlabeled data to train large language models for reasoning. It sharply reduces annotation costs, prevents the training collapse seen in purely unsupervised approaches, and outperforms fully supervised methods on several mathematical reasoning benchmarks.


Source: arXiv:2512.13106