PretrainZero: Reinforcement Active Pretraining
1️⃣ One-Sentence Summary
This paper proposes PretrainZero, a reinforcement learning framework that lets large language models actively learn from massive unlabeled text the way humans do, without relying on domain-specific reward signals, thereby substantially improving their general reasoning ability in domains such as math and science.
Mimicking human behavior to actively learn from general experience and achieve artificial general intelligence has long been a human dream. Recent reinforcement learning (RL) based large thinking models demonstrate impressive expert-level abilities in domains such as software and math, but they still rely heavily on verifiable rewards in specific domains, which places a significant bottleneck on extending the performance boundary of general reasoning capabilities. In this work, we propose PretrainZero, a reinforcement active learning framework built on the pretraining corpus that extends RL from domain-specific post-training to general pretraining. PretrainZero has the following characteristics: 1) Active pretraining: inspired by the active learning ability of humans, PretrainZero learns a unified reasoning policy that actively identifies reasonable and informative content in the pretraining corpus and reasons to predict that content via RL. 2) Self-supervised learning: without any verifiable labels, pretrained reward models, or supervised fine-tuning, we directly pretrain reasoners from 3B to 30B base models on the general Wikipedia corpus using RL, significantly breaking the verification data wall for general reasoning. 3) Verification scaling: by tackling increasingly challenging masked spans, PretrainZero substantially enhances the general reasoning abilities of pretrained base models. In reinforcement pretraining, PretrainZero improves Qwen3-4B-Base by 8.43, 5.96, and 10.60 points on MMLU-Pro, SuperGPQA, and the math benchmark average, respectively. In post-training, the pretrained models can also serve as reasoning foundation models for downstream RLVR tasks.
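To make the self-supervised training signal concrete, below is a minimal Python sketch of one way a masked-span reward could work: a span is held out from a corpus passage, the policy's prediction is scored against the hidden span, and that score serves as the RL reward, so no external labels or reward model are required. The span-selection heuristic, the token-F1 reward, and all function names here are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch (assumed, not the paper's implementation): self-supervised
# masked-span reward for reinforcement pretraining on raw text.
import random
import re


def mask_span(passage: str, span_len: int = 8) -> tuple[str, str]:
    """Hold out a contiguous span of tokens; the hidden text is the target."""
    tokens = passage.split()
    if len(tokens) <= span_len:
        return passage, ""
    start = random.randrange(len(tokens) - span_len)
    target = " ".join(tokens[start:start + span_len])
    masked = " ".join(tokens[:start] + ["<MASK>"] + tokens[start + span_len:])
    return masked, target


def span_reward(prediction: str, target: str) -> float:
    """Token-level F1 between the policy's prediction and the hidden span,
    used directly as the RL reward (no labels or reward model needed)."""
    pred_tokens = re.findall(r"\w+", prediction.lower())
    tgt_tokens = re.findall(r"\w+", target.lower())
    if not pred_tokens or not tgt_tokens:
        return 0.0
    common = len(set(pred_tokens) & set(tgt_tokens))
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(tgt_tokens)
    return 2 * precision * recall / (precision + recall)


# Usage: the policy sees the masked passage, reasons, and emits a prediction;
# the reward is computed against the original span and fed to the RL update.
passage = "The Eiffel Tower was completed in 1889 for the Paris World's Fair."
masked, target = mask_span(passage, span_len=4)
prediction = "completed in 1889 for"  # stand-in for a policy rollout
print(masked)
print(span_reward(prediction, target))
```

In this reading, "verification scaling" would correspond to sampling progressively harder spans (longer or more informative ones) as the policy improves, though the exact curriculum is not specified here.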