arXiv submission date: 2026-03-02
📄 Abstract - KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models

Knowledge distillation (KD) is an essential technique to compress large language models (LLMs) into smaller ones. However, despite the distinct roles of the student model and the teacher model in KD, most existing frameworks still use a homogeneous training backend (e.g., FSDP and DeepSpeed) for both models, leading to suboptimal training efficiency. In this paper, we present a novel framework for LLM distillation, termed **KDFlow**, which features a decoupled architecture and employs SGLang for teacher inference. By bridging the training efficiency of FSDP2 and the inference efficiency of SGLang, KDFlow achieves full utilization of both advantages in a unified system. Moreover, instead of transferring full logits across different processes, our framework only transmits the teacher's hidden states using zero-copy data transfer and recomputes the logits on the student side, effectively balancing the communication cost and KD performance. Furthermore, our framework supports both off-policy and on-policy distillation and incorporates KD algorithms for cross-tokenizer KD through highly extensible and user-friendly APIs. Experiments show that KDFlow can achieve **1.44× to 6.36×** speedup compared to current KD frameworks, enabling researchers to rapidly prototype and scale LLM distillation with minimal engineering overhead. Code is available at: this https URL
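The hidden-state transfer described in the abstract can be sketched as follows: the teacher ships hidden states of shape (seq_len, d_model), which are much smaller than full logits of shape (seq_len, vocab_size) since vocab_size ≫ d_model, and the student side recovers the logits by projecting through the LM head before computing a KD loss. This is a minimal NumPy sketch under assumed shapes and function names; it is not KDFlow's actual API.

```python
import numpy as np

def recompute_logits(hidden_states, lm_head_weight):
    # Project teacher hidden states (seq_len, d_model) through the
    # LM head weight (vocab_size, d_model) to recover teacher logits
    # on the student side, avoiding transfer of the full logit tensor.
    return hidden_states @ lm_head_weight.T

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kd_kl_loss(student_logits, teacher_logits, temperature=2.0):
    # Forward KL(teacher || student) with temperature scaling,
    # averaged over sequence positions (a common KD objective).
    t = softmax(teacher_logits / temperature)
    log_t = np.log(t + 1e-12)
    log_s = np.log(softmax(student_logits / temperature) + 1e-12)
    return float((t * (log_t - log_s)).sum(axis=-1).mean() * temperature**2)

# Illustrative usage with toy dimensions (seq_len=4, d_model=8, vocab=16).
rng = np.random.default_rng(0)
hidden = rng.standard_normal((4, 8))        # received from the teacher
lm_head = rng.standard_normal((16, 8))      # shared LM head weight
teacher_logits = recompute_logits(hidden, lm_head)
student_logits = rng.standard_normal((4, 16))
loss = kd_kl_loss(student_logits, teacher_logits)
```

The payload saving is the ratio vocab_size / d_model; for a real LLM with, say, a 128K vocabulary and a 4K hidden size, transmitting hidden states is roughly 32× cheaper than transmitting logits.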

Top-level tags: llm model training systems
Detailed tags: knowledge distillation training efficiency inference optimization framework large language models

KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models


1️⃣ One-sentence summary

This paper proposes a new framework called KDFlow that decouples teacher-model inference from student-model training and adopts a novel data-transfer strategy, significantly improving both the efficiency and the usability of knowledge distillation for large language models.

Source: arXiv: 2603.01875