📄 Abstract - RDMA Point-to-Point Communication for LLM Systems

Emerging Large Language Model (LLM) system patterns, such as disaggregated inference, Mixture-of-Experts (MoE) routing, and asynchronous reinforcement fine-tuning, require flexible point-to-point communication beyond simple collectives. Existing implementations are locked to specific Network Interface Controllers (NICs), hindering integration into inference engines and portability across hardware providers. We present TransferEngine, which bridges the functionality of common NICs to expose a uniform interface. TransferEngine exposes one-sided WriteImm operations with an ImmCounter primitive for completion notification, without ordering assumptions about the network transport, and transparently manages multiple NICs per GPU. We demonstrate peak throughput of 400 Gbps on both NVIDIA ConnectX-7 and AWS Elastic Fabric Adapter (EFA). We showcase TransferEngine through three production systems: (1) KvCache transfer for disaggregated inference with dynamic scaling, (2) RL weight updates achieving 1.3 seconds for trillion-parameter models, and (3) a MoE dispatch/combine implementation exceeding DeepEP decode latency on ConnectX-7, with the first viable latencies on EFA. We demonstrate that our portable point-to-point communication complements collectives while avoiding lock-in.
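The abstract's ImmCounter primitive can be illustrated with a small, hypothetical sketch (the class name and interface below are assumptions for illustration, not TransferEngine's actual API): the receiver simply counts arriving WriteImm immediates, so completion depends only on how many writes have landed, never on the order they arrive in across NICs.

```python
import threading

class ImmCounter:
    """Hypothetical sketch of an ImmCounter-style completion primitive.

    Each one-sided WriteImm delivers an immediate value on the receiver,
    which bumps a counter. Completion is detected once the counter reaches
    the expected number of writes, with no ordering assumption.
    """

    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def on_immediate(self, imm_value: int) -> None:
        # Called by the (simulated) NIC for each arriving WriteImm.
        with self._cond:
            self._count += 1
            self._cond.notify_all()

    def wait(self, expected: int, timeout: float = 5.0) -> bool:
        # Block until `expected` immediates have arrived, in any order.
        with self._cond:
            return self._cond.wait_for(lambda: self._count >= expected,
                                       timeout)

# Simulate out-of-order delivery (e.g. writes spread over multiple NICs):
# completion depends only on the count of immediates, not their order.
counter = ImmCounter()
for imm in [2, 0, 1]:  # immediates land out of order
    counter.on_immediate(imm)
print(counter.wait(expected=3))  # -> True
```

This mirrors why the design works on transports without ordering guarantees such as EFA: the receiver never needs to know *which* write completed, only *how many*.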

Top-level tags: systems llm model training
Detailed tags: rdma point-to-point communication network interface distributed systems mixture-of-experts

📄 Paper Summary

RDMA Point-to-Point Communication for LLM Systems


1️⃣ One-Sentence Summary

This paper introduces TransferEngine, a uniform communication interface that resolves the incompatibility of point-to-point communication across different hardware in LLM systems, delivering high-performance, portable data transfer and demonstrating significant efficiency gains in several production applications.
