arXiv submission date: 2026-04-03
📄 Abstract - Communication-free Sampling and 4D Hybrid Parallelism for Scalable Mini-batch GNN Training

Graph neural networks (GNNs) are widely used for learning on graph datasets derived from various real-world scenarios. Learning from extremely large graphs requires distributed training, and mini-batching with sampling is a popular approach for parallelizing GNN training. Existing distributed mini-batch approaches have significant performance bottlenecks due to expensive sampling methods and limited scaling when using data parallelism. In this work, we present ScaleGNN, a 4D parallel framework for scalable mini-batch GNN training that combines communication-free distributed sampling, 3D parallel matrix multiplication (PMM), and data parallelism. ScaleGNN introduces a uniform vertex sampling algorithm, enabling each process (GPU device) to construct its local mini-batch, i.e., subgraph partition, without any inter-process communication. 3D PMM enables scaling mini-batch training to much larger GPU counts than vanilla data parallelism with significantly lower communication overheads. We also present additional optimizations: overlapping sampling with training, reducing communication overhead by sending data in lower precision, kernel fusion, and communication-computation overlap. We evaluate ScaleGNN on five graph datasets and demonstrate strong scaling up to 2048 GPUs on Perlmutter, 2048 GCDs on Frontier, and 1024 GPUs on Tuolumne. On Perlmutter, ScaleGNN achieves 3.5x end-to-end training speedup over the SOTA baseline on ogbn-products.
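One common way to realize communication-free uniform vertex sampling is to have every rank seed an identical pseudo-random generator, so all ranks independently draw the same global mini-batch and each rank then keeps only the vertices in its own partition. The sketch below illustrates this general pattern; the seeding scheme, the contiguous block partitioning, and the function name `local_minibatch_vertices` are illustrative assumptions, not ScaleGNN's exact algorithm.

```python
import numpy as np

def local_minibatch_vertices(num_vertices, batch_size, epoch_seed,
                             rank, world_size):
    """Sample a rank-local slice of a global mini-batch with no communication."""
    # Every rank seeds the same RNG, so all ranks draw an identical
    # global vertex sample without exchanging any messages
    # (an illustrative pattern; the paper's algorithm may differ).
    rng = np.random.default_rng(epoch_seed)
    global_batch = rng.choice(num_vertices, size=batch_size, replace=False)

    # Each rank keeps only vertices falling in its own partition
    # (here: a simple contiguous block partition, assumed for illustration).
    part = num_vertices // world_size
    lo = rank * part
    hi = (rank + 1) * part if rank < world_size - 1 else num_vertices
    return global_batch[(global_batch >= lo) & (global_batch < hi)]
```

Because the per-rank slices are disjoint and their union is exactly the shared global sample, no rank needs to tell any other which vertices it selected.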

Top-level tags: systems, model training, machine learning
Detailed tags: graph neural networks, distributed training, parallel computing, sampling, scalability

Communication-free Sampling and 4D Hybrid Parallelism for Scalable Mini-batch GNN Training


1️⃣ One-sentence summary

This paper presents a 4D parallel training framework called ScaleGNN, which substantially improves the efficiency of large-scale graph neural network training through a novel communication-free sampling technique and a hybrid parallelism strategy, enabling it to run efficiently on thousands of GPUs.

Source: arXiv:2604.02651