arXiv submission date: 2026-01-14
📄 Abstract - Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning

In this report, we introduce DASD-4B-Thinking, a lightweight yet highly capable, fully open-source reasoning model. It achieves SOTA performance among open-source models of comparable scale across challenging benchmarks in mathematics, scientific reasoning, and code generation -- even outperforming several larger models. We begin by critically reexamining a widely adopted distillation paradigm in the community: SFT on teacher-generated responses, also known as sequence-level distillation. Although a series of recent works following this scheme have demonstrated remarkable efficiency and strong empirical performance, they are primarily grounded in the SFT perspective. Consequently, these approaches focus predominantly on designing heuristic rules for SFT data filtering, while largely overlooking the core principle of distillation itself -- enabling the student model to learn the teacher's full output distribution so as to inherit its generalization capability. Specifically, we identify three critical limitations in current practice: i) Inadequate representation of the teacher's sequence-level distribution; ii) Misalignment between the teacher's output distribution and the student's learning capacity; and iii) Exposure bias arising from teacher-forced training versus autoregressive inference. In summary, these shortcomings reflect a systemic absence of explicit teacher-student interaction throughout the distillation process, leaving the essence of distillation underexploited. To address these issues, we propose several methodological innovations that collectively form an enhanced sequence-level distillation training pipeline. Remarkably, DASD-4B-Thinking obtains competitive results using only 448K training samples -- an order of magnitude fewer than those employed by most existing open-source efforts. To support community research, we publicly release our models and the training dataset.
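For context, the baseline the abstract critiques can be summarized in a few lines: sequence-level distillation reduces to supervised fine-tuning on a single teacher-generated response per prompt, with teacher forcing during training. The sketch below is illustrative only; the function and variable names are not from the paper.

```python
import torch
import torch.nn.functional as F

def sequence_level_distillation_step(student, teacher_response_ids, pad_id, optimizer):
    """One SFT step on a teacher-generated response (sequence-level distillation).

    `student` is assumed to be a callable mapping token ids of shape
    (batch, seq_len) to next-token logits of shape (batch, seq_len, vocab).
    The student is teacher-forced: it predicts token t+1 from the teacher's
    written prefix, so it only ever sees the one sampled teacher sequence,
    not the teacher's full per-token output distribution.
    """
    inputs = teacher_response_ids[:, :-1]    # prefix fed to the student
    targets = teacher_response_ids[:, 1:]    # next-token labels taken from the teacher text
    logits = student(inputs)                 # (batch, seq_len - 1, vocab)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,                 # skip padding positions
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

This framing makes the abstract's three limitations concrete: a single sampled sequence under-represents the teacher's distribution, nothing checks whether that sequence matches the student's capacity, and the teacher-forced prefixes differ from what the student generates autoregressively at inference time (exposure bias).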

Top-level tags: llm, model training, model evaluation
Detailed tags: knowledge distillation, reasoning, sequence distillation, long chain-of-thought, data efficiency

Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning


1️⃣ One-Sentence Summary

This paper introduces DASD-4B-Thinking, a new lightweight, fully open-source reasoning model. By improving the conventional sequence-level distillation recipe and addressing core issues such as the mismatch between the teacher's output distribution and the student's learning capacity, it reaches leading performance among open-source models of comparable scale on mathematics, scientific reasoning, and code generation while using only a small amount of training data.
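To make the "learn the teacher's full output distribution" principle concrete, the snippet below sketches generic token-level distribution matching: a KL loss against the teacher's per-token distribution instead of a cross-entropy loss on one sampled teacher sequence. This is a textbook illustration of the principle the paper invokes, not the authors' actual pipeline, whose components are not detailed in this abstract.

```python
import torch
import torch.nn.functional as F

def token_level_kd_loss(student_logits, teacher_logits, temperature=1.0):
    """Forward KL between teacher and student next-token distributions.

    Both logit tensors have shape (batch, seq_len, vocab) and are computed
    over the same prefix. Unlike plain SFT on teacher text, the teacher's
    full distribution at every position supervises the student, not just
    the single token the teacher happened to sample.
    """
    t_logprobs = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), averaged over the batch dimension
    kl = F.kl_div(s_logprobs, t_logprobs, log_target=True, reduction="batchmean")
    return kl * (temperature ** 2)
```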

Source: arXiv:2601.09088