arXiv submission date: 2026-01-08
📄 Abstract - SCALER: Synthetic Scalable Adaptive Learning Environment for Reasoning

Reinforcement learning (RL) offers a principled way to enhance the reasoning capabilities of large language models, yet its effectiveness hinges on training signals that remain informative as models evolve. In practice, RL progress often slows when task difficulty becomes poorly aligned with model capability, or when training is dominated by a narrow set of recurring problem patterns. To jointly address these issues, we propose SCALER (Synthetic sCalable Adaptive Learning Environment for Reasoning), a framework that sustains effective learning signals through adaptive environment design. SCALER introduces a scalable synthesis pipeline that converts real-world programming problems into verifiable reasoning environments with controllable difficulty and unbounded instance generation, enabling RL training beyond finite datasets while preserving strong correctness guarantees. Building on this, SCALER further employs an adaptive multi-environment RL strategy that dynamically adjusts instance difficulty and curates the active set of environments to track the model's capability frontier and maintain distributional diversity. This co-adaptation prevents reward sparsity, mitigates overfitting to narrow task patterns, and supports sustained improvement throughout training. Extensive experiments show that SCALER consistently outperforms dataset-based RL baselines across diverse reasoning benchmarks and exhibits more stable, long-horizon training dynamics.
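
The abstract describes the adaptive multi-environment strategy only at a high level. As a rough illustration of the kind of controller it implies, the sketch below adjusts each environment's difficulty from the model's recent pass rate and exposes a sampling hook for instance synthesis. All names and parameters (AdaptiveEnvPool, target_low, target_high, window) are hypothetical assumptions for illustration, not the paper's actual API or algorithm.

```python
import random
from dataclasses import dataclass, field

@dataclass
class EnvState:
    """Per-environment bookkeeping (hypothetical, for illustration only)."""
    difficulty: int = 1                                  # current instance difficulty level
    recent_rewards: list = field(default_factory=list)   # recent binary correctness rewards

class AdaptiveEnvPool:
    """Illustrative sketch of adaptive difficulty control over many environments.

    Assumptions (not from the paper): pass rate is the control signal, difficulty
    moves one discrete level at a time, and adaptation happens once per window.
    """

    def __init__(self, env_names, target_low=0.2, target_high=0.8, window=64):
        self.envs = {name: EnvState() for name in env_names}
        self.target_low = target_low    # below this, instances are too hard (sparse reward)
        self.target_high = target_high  # above this, instances are too easy (saturated reward)
        self.window = window

    def record(self, name, reward):
        """Store a binary correctness reward for one sampled instance."""
        hist = self.envs[name].recent_rewards
        hist.append(reward)
        if len(hist) > self.window:
            hist.pop(0)

    def adapt(self):
        """Nudge each environment's difficulty toward the model's capability frontier."""
        for state in self.envs.values():
            if len(state.recent_rewards) < self.window:
                continue  # not enough signal yet
            pass_rate = sum(state.recent_rewards) / len(state.recent_rewards)
            if pass_rate > self.target_high:
                state.difficulty += 1                             # too easy: scale up
            elif pass_rate < self.target_low:
                state.difficulty = max(1, state.difficulty - 1)   # too hard: back off
            state.recent_rewards.clear()

    def sample(self):
        """Pick an environment uniformly and emit (name, difficulty) for instance synthesis."""
        name = random.choice(list(self.envs))
        return name, self.envs[name].difficulty
```

Keeping the pass rate inside a target band is one simple way to address both failure modes the abstract mentions: reward sparsity when instances outrun the model, and uninformative, easily overfit rewards when they lag behind it.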

Top-level tags: reinforcement learning, llm, model training
Detailed tags: adaptive environment design, reasoning, synthetic data generation, curriculum learning, multi-environment rl

SCALER: Synthetic Scalable Adaptive Learning Environment for Reasoning


1️⃣ One-sentence summary

This paper proposes a framework called SCALER that trains large language models on automatically generated reasoning problems with controllable difficulty and an unbounded supply of instances, while dynamically adjusting the difficulty and diversity of the training content. This lets the model improve its reasoning ability continuously and stably under reinforcement learning, avoiding the stalled training signals and overfitting that afflict traditional dataset-based methods.
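
To make "difficulty-controllable, verifiable problem generation" concrete, the toy example below shows one possible shape such an environment could take: a generator parameterized by a difficulty knob and a programmatic verifier that returns an exact correctness reward. The task (sorting) and all names are illustrative assumptions, not the paper's actual synthesis pipeline.

```python
import random

class SortingEnv:
    """Toy verifiable environment with a difficulty knob (illustrative only).

    It demonstrates two properties the summary attributes to SCALER-style
    environments: (a) unlimited fresh instances can be drawn at any difficulty,
    and (b) answers are checked programmatically, so correctness rewards stay exact.
    """

    def generate(self, difficulty: int, rng: random.Random):
        """Emit a fresh problem; higher difficulty means longer lists to sort."""
        n = 3 + 2 * difficulty
        xs = [rng.randint(-1000, 1000) for _ in range(n)]
        prompt = f"Sort the following numbers in ascending order: {xs}"
        return prompt, xs

    def verify(self, xs, answer) -> float:
        """Binary reward: 1.0 iff the answer is exactly the sorted list."""
        return 1.0 if answer == sorted(xs) else 0.0

# Usage: draw an instance at a chosen difficulty and score a candidate answer.
env = SortingEnv()
rng = random.Random(0)
prompt, xs = env.generate(difficulty=2, rng=rng)
reward = env.verify(xs, sorted(xs))  # a correct answer earns reward 1.0
```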

Source: arXiv 2601.04809