FreeScale: Distributed Training for Sequence Recommendation Models with Minimal Scaling Cost
1️⃣ One-Sentence Summary
FreeScale is a distributed system for training large-scale sequence recommendation models. By combining intelligent data balancing, prioritized overlap of communication with computation, and techniques that avoid GPU resource contention, it significantly reduces the time spent waiting during training, eliminating over 90% of the wasted efficiency in real-world deployments.
Modern industrial Deep Learning Recommendation Models typically extract user preferences by analyzing sequential interaction histories, then generate predictions based on these derived interests. The inherent heterogeneity in data characteristics frequently results in substantial under-utilization of computational resources during large-scale training, primarily due to computational bubbles caused by severe stragglers and slow blocking communications. This paper introduces FreeScale, a solution designed to (1) mitigate the straggler problem through meticulously load-balanced input samples, (2) minimize blocking communication by overlapping prioritized embedding communications with computations, and (3) resolve GPU resource competition during computation-communication overlap by communicating through SM-Free techniques. Empirical evaluation demonstrates that FreeScale achieves up to a 90.3% reduction in computational bubbles when applied to real-world workloads running on 256 H100 GPUs.
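To make the first technique concrete, the sketch below shows one plausible way to load-balance variable-length sequence samples across GPUs: a greedy longest-first (LPT) assignment that keeps the per-worker total sequence length even. This is a minimal sketch under an assumed cost model (compute cost proportional to sequence length); the function name `balance_batches` and the heuristic itself are illustrative stand-ins, not necessarily FreeScale's actual algorithm.

```python
import heapq

def balance_batches(seq_lengths, num_workers):
    """Greedily assign variable-length user sequences to workers so that
    the per-worker total sequence length (a proxy for compute cost) is even.

    Longest-processing-time-first: visit sequences in descending length
    order and always place the next one on the least-loaded worker.
    """
    # Min-heap of (current_load, worker_id)
    heap = [(0, w) for w in range(num_workers)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(num_workers)]

    # Sort sample indices by sequence length, longest first
    for idx in sorted(range(len(seq_lengths)), key=lambda i: -seq_lengths[i]):
        load, worker = heapq.heappop(heap)
        assignment[worker].append(idx)
        heapq.heappush(heap, (load + seq_lengths[idx], worker))
    return assignment

# Example: 8 user histories of very different lengths, spread over 2 GPUs
lengths = [512, 32, 256, 64, 1024, 128, 16, 896]
print(balance_batches(lengths, num_workers=2))
```

A heuristic like LPT is attractive here because it is cheap to run per batch and carries a classical worst-case guarantee on the makespan, which is exactly the quantity a straggler inflates.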
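The second and third techniques can be illustrated together with CUDA streams in PyTorch. The sketch below is an assumption-laden illustration: `embed_chunk`, `recv_buf`, and `compute_step` are hypothetical stand-ins, and FreeScale's SM-Free transfers presumably rely on dedicated copy engines rather than communication kernels; a plain non-blocking copy on a prioritized stream only approximates that idea.

```python
import torch

# A minimal sketch of overlapping embedding communication with computation
# on a high-priority CUDA stream. All names here are illustrative stand-ins,
# not FreeScale's API.

device = torch.device("cuda:0")
# Lower number = higher priority; the comm stream is prioritized so pending
# embedding transfers are scheduled ahead of lower-priority work.
comm_stream = torch.cuda.Stream(device=device, priority=-1)

embed_chunk = torch.randn(1 << 20, device=device)   # embedding shard to move
recv_buf = torch.empty_like(embed_chunk)            # stand-in peer buffer

def compute_step(x):
    # Placeholder for the dense-model computation that must not be stalled.
    return torch.relu(x @ x)

x = torch.randn(2048, 2048, device=device)

comm_done = torch.cuda.Event()
with torch.cuda.stream(comm_stream):
    # Non-blocking copy issued on the prioritized stream. Across P2P-capable
    # GPUs such copies are typically serviced by DMA copy engines rather
    # than SMs -- the same motivation behind SM-Free communication.
    recv_buf.copy_(embed_chunk, non_blocking=True)
    comm_done.record(comm_stream)

y = compute_step(x)  # runs on the default stream, overlapping the copy
torch.cuda.current_stream().wait_event(comm_done)   # sync before using recv_buf
```

The event-based synchronization is the key design point: compute proceeds immediately, and only the consumer of the transferred embeddings waits, so the copy hides behind useful work instead of blocking it.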
Source: arXiv:2604.24073