RLHFless: Serverless Computing for Efficient RLHF
1️⃣ One-Sentence Summary
This paper proposes a new framework called RLHFless, which leverages serverless computing to dynamically provision resources, significantly improving the training efficiency and reducing the cost of Reinforcement Learning from Human Feedback (RLHF).
Reinforcement Learning from Human Feedback (RLHF) has been widely applied to Large Language Model (LLM) post-training to align model outputs with human preferences. Recent models, such as DeepSeek-R1, have also shown RLHF's potential to improve LLM reasoning on complex tasks. In RL, inference and training co-exist, creating dynamic resource demands throughout the workflow. Compared to traditional RL, RLHF further challenges training efficiency due to expanding model sizes and resource consumption. Several RLHF frameworks aim to balance flexible abstraction and efficient execution. However, they rely on serverful infrastructures, which struggle with fine-grained resource variability. As a result, during synchronous RLHF training, idle time between or within RL components often causes overhead and resource wastage. To address these issues, we present RLHFless, the first scalable training framework for synchronous RLHF, built on serverless computing environments. RLHFless adapts to dynamic resource demands throughout the RLHF pipeline, pre-computes shared prefixes to avoid repeated computation, and uses a cost-aware actor scaling strategy that accounts for response length variation to find sweet spots with lower cost and higher speed. In addition, RLHFless assigns workloads efficiently to reduce intra-function imbalance and idle time. Experiments on both physical testbeds and a large-scale simulated cluster show that RLHFless achieves up to 1.35x speedup and 44.8% cost reduction compared to the state-of-the-art baseline.
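The abstract's cost-aware actor scaling strategy picks an actor count that balances cost and speed under skewed response lengths. A minimal sketch of that idea, assuming a greedy longest-processing-time load estimate and a cost×time score as the "sweet spot" proxy (the function names and scoring rule are illustrative assumptions, not the paper's actual algorithm):

```python
def estimate_makespan(lengths, n_actors):
    """Approximate completion time for n_actors serverless actors:
    greedily assign each response (proportional to its length) to the
    currently least-loaded actor, longest responses first."""
    loads = [0.0] * n_actors
    for length in sorted(lengths, reverse=True):
        i = min(range(n_actors), key=loads.__getitem__)
        loads[i] += length
    return max(loads)

def sweet_spot(lengths, max_actors, price_per_actor_second=1.0):
    """Scan candidate actor counts and return the one minimizing a
    cost-times-time score -- a simple stand-in for a cost-aware
    sweet spot between lower cost and higher speed."""
    best = None
    for n in range(1, max_actors + 1):
        t = estimate_makespan(lengths, n)
        cost = n * t * price_per_actor_second  # pay for all actors until the slowest finishes
        score = cost * t
        if best is None or score < best[0]:
            best = (score, n, t, cost)
    return best[1:]  # (n_actors, makespan, cost)
```

For example, with response lengths `[4, 3, 2, 1]` and up to 4 actors, the scan settles on 3 actors: a fourth actor raises cost without shortening the makespan, since one long response dominates. Real scaling decisions would also account for cold starts and per-invocation overheads, which this sketch omits.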
Source: arXiv: 2602.22718