arXiv submission date: 2026-04-13
📄 Abstract - Relax: An Asynchronous Reinforcement Learning Engine for Omni-Modal Post-Training at Scale

Reinforcement learning (RL) post-training has proven effective at unlocking reasoning, self-reflection, and tool-use capabilities in large language models. As models extend to omni-modal inputs and agentic multi-turn workflows, RL training systems face three interdependent challenges: heterogeneous data flows, operational robustness at scale, and the staleness-throughput tradeoff. We present Relax (Reinforcement Engine Leveraging Agentic X-modality), an open-source RL training engine that addresses these challenges through three co-designed architectural layers. First, an omni-native architecture builds multimodal support into the full stack, from data preprocessing and modality-aware parallelism to inference generation, rather than retrofitting it onto a text-centric pipeline. Second, each RL role runs as an independent, fault-isolated service that can be scaled, recovered, and upgraded without global coordination. Third, service-level decoupling enables asynchronous training via the TransferQueue data bus, where a single staleness parameter smoothly interpolates among on-policy, near-on-policy, and fully asynchronous execution. Relax achieves a 1.20× end-to-end speedup over veRL on Qwen3-4B on-policy training. Its fully async mode delivers a 1.76× speedup over colocate on Qwen3-4B and a 2.00× speedup on Qwen3-Omni-30B, while all modes converge to the same reward level. Relax supports R3 (Rollout Routing Replay) [ma2025r3] for MoE models with only 1.9% overhead, compared to 32% degradation in veRL under the same configuration. It further demonstrates stable omni-modal RL convergence on Qwen3-Omni across image, text, and audio, sustaining over 2,000 steps on video without degradation. Relax is available at this https URL.
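The abstract describes a single staleness parameter that interpolates between on-policy and fully asynchronous training. The paper's actual TransferQueue implementation is not shown here; the following is a minimal illustrative sketch (class and method names are hypothetical) of how one knob can bound how far rollout data may lag behind the trainer's policy version:

```python
from collections import deque

class TransferQueueSketch:
    """Illustrative sketch, NOT the real Relax/TransferQueue API.

    A single max_staleness knob interpolates between training modes:
      max_staleness = 0  -> on-policy (only samples from the current policy)
      small values       -> near-on-policy
      large values       -> effectively fully asynchronous
    """

    def __init__(self, max_staleness: int):
        self.max_staleness = max_staleness
        # (policy_version, sample) pairs pushed by rollout workers
        self.buffer = deque()

    def put(self, policy_version: int, sample):
        """Rollout service pushes a sample tagged with the policy
        version that generated it."""
        self.buffer.append((policy_version, sample))

    def get_batch(self, current_version: int, batch_size: int):
        """Trainer pulls a batch, discarding samples whose generating
        policy is more than max_staleness versions behind."""
        batch = []
        while self.buffer and len(batch) < batch_size:
            version, sample = self.buffer.popleft()
            if current_version - version <= self.max_staleness:
                batch.append(sample)  # fresh enough to train on
            # else: drop the stale sample silently
        return batch
```

For example, with `max_staleness=1` a trainer at policy version 5 would accept samples generated by versions 4 and 5 but discard anything older; setting the knob to 0 forces strictly on-policy consumption.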

Top-level tags: reinforcement learning, model training, systems
Detailed tags: asynchronous rl, multimodal training, rl engine, post-training, scalable systems

Relax: An Asynchronous Reinforcement Learning Engine for Omni-Modal Post-Training at Scale


1️⃣ One-sentence summary

This paper presents Relax, an open-source reinforcement learning training engine. Through a co-designed three-layer architecture, it tackles the three core challenges of RL post-training for large models on multimodal and multi-turn agentic tasks: heterogeneous data flows, operational stability at scale, and the tradeoff between training throughput and data freshness, yielding significant gains in training speed and system robustness.

Source: arXiv:2604.11554