ReSyn: Autonomously Scaling Synthetic Environments for Reasoning Models
1️⃣ One-sentence summary
This paper introduces ReSyn, an automated system that generates diverse reasoning-task environments at scale and uses them to train language models, yielding significant gains on complex tasks such as mathematical and logical reasoning.
Reinforcement learning with verifiable rewards (RLVR) has emerged as a promising approach for training reasoning language models (RLMs) by leveraging supervision from verifiers. Although implementing a verifier is easier than annotating solutions for many tasks, existing synthetic data generation methods remain largely solution-centric, while verifier-based methods rely on a few hand-crafted procedural environments. In this work, we scale RLVR by introducing ReSyn, a pipeline that generates diverse reasoning environments equipped with instance generators and verifiers, covering tasks such as constraint satisfaction, algorithmic puzzles, and spatial reasoning. A Qwen2.5-7B-Instruct model trained with RL on ReSyn data achieves consistent gains across reasoning benchmarks and out-of-domain math benchmarks, including a 27% relative improvement on the challenging BBEH benchmark. Ablations show that verifier-based supervision and increased task diversity both contribute significantly, providing empirical evidence that generating reasoning environments at scale can enhance reasoning abilities in RLMs.
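The abstract describes environments that pair an instance generator with a verifier, so that RL training rewards the model only when its answer checks out, with no reference solution needed. As a minimal sketch of that interface, assuming a toy constraint-satisfaction task (the class and method names below are illustrative, not from the paper):

```python
import random
from itertools import combinations

class SumPuzzleEnv:
    """Hypothetical ReSyn-style environment: find k numbers from a
    list that sum to a target. The generator samples instances; the
    verifier scores a proposed answer without a reference solution."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def generate(self, n=8, k=3):
        numbers = [self.rng.randint(1, 50) for _ in range(n)]
        answer = self.rng.sample(numbers, k)
        target = sum(answer)
        # Only the instance is exposed; the sampled answer is
        # discarded, since any valid subset earns the reward.
        return {"numbers": numbers, "k": k, "target": target}

    def verify(self, instance, proposed):
        """Return reward 1.0 if `proposed` is a valid answer, else 0.0."""
        if len(proposed) != instance["k"]:
            return 0.0
        pool = list(instance["numbers"])  # respect multiplicities
        for x in proposed:
            if x not in pool:
                return 0.0
            pool.remove(x)
        return 1.0 if sum(proposed) == instance["target"] else 0.0

env = SumPuzzleEnv(seed=42)
inst = env.generate()
# Brute-force one valid answer to show the verifier accepting it.
solution = next(c for c in combinations(inst["numbers"], inst["k"])
                if sum(c) == inst["target"])
print(env.verify(inst, list(solution)))  # 1.0
```

The key property this illustrates is the asymmetry RLVR exploits: checking an answer (the `verify` method) is cheap and deterministic even when producing one is hard, which makes it a usable reward signal at scale.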
Source: arXiv: 2602.20117