SimCT: Recovering Lost Supervision for Cross-Tokenizer On-Policy Distillation
1️⃣ One-sentence summary
When teacher and student models use different tokenizers, conventional on-policy distillation loses a large amount of supervision signal due to vocabulary mismatch. This paper proposes SimCT, which introduces short multi-token contiguous spans as shared supervision units, recovering the lost signal without changing the form of the distillation loss, and significantly outperforming existing baselines on mathematical reasoning and code-generation tasks.
On-policy distillation (OPD) is a standard tool for transferring teacher behavior to a smaller student, but it implicitly assumes that teacher and student predictions are comparable token by token, an assumption that fails whenever the two models tokenize the same text differently. Under heterogeneous tokenizers, exact shared-token matching silently discards a large fraction of the teacher signal at precisely the positions where vocabularies disagree. We propose Simple Cross-Tokenizer OPD (SimCT), which restores this signal by enlarging the supervision space: alongside shared tokens, SimCT compares teacher and student over short multi-token continuations that both tokenizers can realize, leaving the OPD loss form itself unchanged. We show that these units are the finest jointly tokenizable supervision interface, and that coarser alternatives remove teacher-student distinctions that are useful for on-policy learning. Across three heterogeneous teacher-student pairs on mathematical reasoning and code-generation benchmarks, SimCT shows consistent gains over shared-vocabulary OPD and representative cross-tokenizer baselines, with ablations confirming that the improvements come from recovering supervision discarded by exact shared-token matching. Code is available at this https URL.
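To make the core idea concrete, here is a minimal sketch (a hypothetical helper, not the paper's released code) of how one might find the finest text spans whose boundaries two different tokenizations agree on: shared tokens fall out as single-token spans, while mismatched regions become short multi-token segments that both tokenizers can realize.

```python
# Hedged sketch: given two tokenizations of the same text, return the minimal
# character spans at which both tokenizers place a boundary. Within each span,
# the two tokenizers may segment differently, but the span itself is jointly
# tokenizable and can serve as a shared supervision unit.

def aligned_segments(tokens_a, tokens_b):
    """Return the minimal jointly tokenizable spans of the underlying text."""
    assert "".join(tokens_a) == "".join(tokens_b), "must tokenize the same text"

    def boundaries(tokens):
        offsets, pos = set(), 0
        for tok in tokens:
            pos += len(tok)
            offsets.add(pos)
        return offsets

    # Span boundaries both tokenizers agree on (character offsets).
    shared = sorted(boundaries(tokens_a) & boundaries(tokens_b))
    text = "".join(tokens_a)
    segments, start = [], 0
    for end in shared:
        segments.append(text[start:end])
        start = end
    return segments

# Toy example: the two tokenizers disagree inside the word but agree at its
# edges, so the whole word becomes one multi-token supervision segment.
a = ["un", "believ", "able"]   # hypothetical tokenizer A
b = ["unbe", "lievable"]       # hypothetical tokenizer B
print(aligned_segments(a, b))  # ['unbelievable']
```

This illustrates why exact shared-token matching discards signal at disagreement positions: neither of A's tokens at offsets 2 and 8 has a counterpart in B, so a token-level match yields nothing there, whereas the span-level view recovers one comparable unit.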
Source: arXiv:2605.07711