arXiv submission date: 2026-04-29
📄 Abstract - Accelerating RL Post-Training Rollouts via System-Integrated Speculative Decoding

RL post-training of frontier language models is increasingly bottlenecked by autoregressive rollout generation, making rollout acceleration a central systems challenge. Many existing efficiency methods improve throughput by changing the rollout or optimization regime, for example through off-policy execution, replay, or lower-precision generation. We instead study speculative decoding as a lossless acceleration primitive for RL rollouts, one that preserves the target model's output distribution. We implement speculative decoding in NeMo-RL with a vLLM backend, supporting both synchronous and asynchronous pipelines and enabling speculation during RL rollouts. The benefit is realizable across speculation mechanisms, such as pretrained MTP heads, small external draft models, or even techniques such as Eagle3 that are traditionally applied after the RL phase. This yields a deployment path for state-of-the-art speculative decoding inside RL training. In a reasoning post-training workload at 8B scale under synchronous RL, speculative decoding improves rollout throughput by 1.8x. Using a high-fidelity performance simulator, we project that combining speculative decoding with asynchronous RL yields up to a 2.5x end-to-end training speedup at 235B scale.
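The "lossless" claim rests on the standard speculative-decoding accept/reject rule (draft, then verify against the target model), which guarantees the output distribution matches the target model's exactly. The following is a minimal toy sketch of that rule in Python; the function name, dict-based distributions, and `rng` parameter are illustrative assumptions, not the paper's NeMo-RL/vLLM implementation.

```python
import random

def speculative_accept(p_target, q_draft, drafted_tokens, rng=random.random):
    """Verify drafted tokens against the target model (toy sketch).

    p_target[i] / q_draft[i]: dicts mapping token -> probability at position i
    under the target and draft models, respectively.
    Returns the accepted prefix; on the first rejection, appends one token
    resampled from the residual distribution and stops.
    """
    accepted = []
    for i, tok in enumerate(drafted_tokens):
        p, q = p_target[i], q_draft[i]
        # Accept the drafted token with probability min(1, p(tok)/q(tok)).
        # Combined with residual resampling below, this makes the overall
        # output distribution exactly equal to the target model's.
        if rng() < min(1.0, p.get(tok, 0.0) / max(q.get(tok, 1e-12), 1e-12)):
            accepted.append(tok)
        else:
            # On rejection, resample from max(0, p - q), renormalized.
            residual = {t: max(p.get(t, 0.0) - q.get(t, 0.0), 0.0) for t in p}
            z = sum(residual.values()) or 1.0
            r, acc = rng() * z, 0.0
            for t, w in residual.items():
                acc += w
                if r <= acc:
                    accepted.append(t)
                    break
            break  # stop verifying after the first rejection
    return accepted
```

Throughput improves because each target-model forward pass can verify several drafted tokens at once, so the expected number of tokens emitted per target pass exceeds one whenever the draft model agrees with the target often enough.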

Top-level tags: llm systems reinforcement learning
Detailed tags: rollout acceleration speculative decoding post-training throughput optimization asynchronous pipeline

Accelerating RL Post-Training Rollouts via System-Integrated Speculative Decoding


1️⃣ One-Sentence Summary

This paper integrates speculative decoding, a lossless acceleration technique, into the RL post-training stage to speed up autoregressive rollout generation; experiments show a 1.8x rollout-throughput improvement for an 8B model under synchronous RL, and a performance simulator projects up to a 2.5x end-to-end training speedup at 235B scale under asynchronous RL.

Source: arXiv 2604.26779