arXiv submission date: 2026-02-03
📄 Abstract - Mitigating Staleness in Asynchronous Pipeline Parallelism via Basis Rotation

Asynchronous pipeline parallelism maximizes hardware utilization by eliminating the pipeline bubbles inherent in synchronous execution, offering a path toward efficient large-scale distributed training. However, this efficiency gain can be compromised by gradient staleness, where immediately applying model updates computed from delayed gradients injects noise into the optimization process. Crucially, we identify an often overlooked pathology: this delay scales linearly with pipeline depth, fundamentally undermining the very scalability the method is intended to provide. In this work, we investigate this inconsistency and bridge the gap by rectifying delayed gradients through basis rotation, restoring scalable asynchronous training while maintaining performance. Specifically, we observe that the deleterious effects of delayed gradients are exacerbated when the Hessian eigenbasis is misaligned with the standard coordinate basis. We demonstrate that this misalignment prevents coordinate-wise adaptive schemes, such as Adam, from effectively leveraging curvature-aware adaptivity. This failure leads to significant oscillations in the optimization trajectory and, consequently, slower convergence. We substantiate these findings through both rigorous theoretical analysis and empirical evaluation. To address this challenge, we propose the use of basis rotation, demonstrating that it effectively mitigates the alignment issue and significantly accelerates convergence in asynchronous settings. For example, training a 1B-parameter LLM with basis rotation reaches the same training loss in 76.8% fewer iterations than the best-performing asynchronous pipeline-parallel training baseline.
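To make the core idea concrete, below is a minimal, hypothetical sketch (not the paper's algorithm or released code) of an Adam-style update applied in a rotated basis: the delayed gradient is rotated into an estimated Hessian eigenbasis, the coordinate-wise moments are maintained there, and the resulting step is rotated back into parameter space. The delay model, the function names, and the way the rotation `Q` is obtained are all illustrative assumptions.

```python
# Hypothetical sketch, not the paper's algorithm or code: an Adam-style step whose
# coordinate-wise moments live in a rotated basis Q, assumed to approximate the
# Hessian eigenbasis, so adaptivity acts along curvature directions.
from collections import deque

import numpy as np


def rotated_adam_step(theta, stale_grad, Q, state, lr=1e-3,
                      beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam-like update on a delayed gradient, with moments kept in the basis Q."""
    g = Q.T @ stale_grad                      # rotate the stale gradient into the eigenbasis
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * g
    state["v"] = beta2 * state["v"] + (1 - beta2) * g * g
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    step = m_hat / (np.sqrt(v_hat) + eps)     # coordinate-wise step, now axis-aligned
    return theta - lr * (Q @ step)            # rotate the update back to parameter space


# Toy usage: a quadratic loss 0.5 * x^T H x whose Hessian is misaligned with the axes,
# optimized with gradients that arrive `delay` steps late (a stand-in for pipeline staleness).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = A @ A.T + 0.1 * np.eye(4)                 # symmetric positive-definite "Hessian"
_, Q = np.linalg.eigh(H)                      # columns of Q form the estimated eigenbasis
theta = rng.normal(size=4)
state = {"m": np.zeros(4), "v": np.zeros(4), "t": 0}

delay = 3
history = deque([theta.copy()] * (delay + 1), maxlen=delay + 1)
for _ in range(500):
    stale_grad = H @ history[0]               # gradient computed from parameters `delay` steps old
    theta = rotated_adam_step(theta, stale_grad, Q, state, lr=5e-2)
    history.append(theta.copy())
print("final loss:", 0.5 * theta @ H @ theta)
```

In this toy setting, rotating into the eigenbasis makes the per-coordinate second-moment scaling of Adam line up with the curvature directions, which is the alignment property the abstract argues delayed gradients otherwise break.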

Top-level tags: systems, model training, machine learning
Detailed tags: pipeline parallelism, gradient staleness, distributed training, optimization, asynchronous training

Mitigating Staleness in Asynchronous Pipeline Parallelism via Basis Rotation


1️⃣ One-Sentence Summary

This paper finds that in asynchronous pipeline-parallel training, gradient delay grows linearly with pipeline depth and severely slows convergence; it proposes correcting the delayed gradients with a mathematical transformation called "basis rotation", which significantly speeds up training of large models while preserving high hardware utilization.
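As a back-of-the-envelope illustration of the staleness issue (a toy assumption, not the paper's experiment), the sketch below runs plain gradient descent on a fixed quadratic while delaying the gradient by a set number of steps, using that delay as a stand-in for pipeline depth; larger delays typically need noticeably more steps to reach the same target loss.

```python
# Toy illustration (an assumption, not the paper's experiment): gradient descent on a
# quadratic with gradients delayed by `delay` steps, where `delay` stands in for
# pipeline depth. Deeper "pipelines" need more steps to reach the same loss.
from collections import deque

import numpy as np


def steps_to_target_loss(delay, lr=0.01, tol=1e-6, max_steps=50000):
    rng = np.random.default_rng(0)
    A = rng.normal(size=(8, 8))
    H = A @ A.T / 16 + 0.2 * np.eye(8)        # fixed quadratic objective 0.5 * x^T H x
    theta = rng.normal(size=8)
    history = deque([theta.copy()] * (delay + 1), maxlen=delay + 1)
    for t in range(1, max_steps + 1):
        stale_grad = H @ history[0]           # gradient evaluated at stale parameters
        theta = theta - lr * stale_grad
        history.append(theta.copy())
        if 0.5 * theta @ H @ theta < tol:
            return t
    return max_steps                          # did not reach the target loss


for depth in (1, 4, 8, 16):                   # stand-in for pipeline depth
    print(f"delay={depth:2d}  steps to target loss={steps_to_target_loss(depth)}")
```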

Source: arXiv 2602.03515