GATES: Self-Distillation under Privileged Context with Consensus Gating
1️⃣ One-Sentence Summary
This paper proposes a self-distillation method called GATES. During training, the model plays a 'tutor' role with access to a reference document, and the consensus among multiple tutor answers serves as a reliable learning signal that guides a 'student' model, which cannot see the document, to learn the full reasoning process. Without any external supervision, this substantially improves the model's question-answering accuracy in document-free settings.
2️⃣ Abstract

We study self-distillation in settings where supervision is unreliable: there are no ground-truth labels, verifiable rewards, or external graders to evaluate answers. We focus on document-grounded question answering with asymmetric context, where a single model serves as both tutor (with access to a relevant source document during training) and student (answering from the question alone at test time). Rather than assuming tutor correctness, we derive supervision online from tutor consensus by sampling multiple document-grounded reasoning traces and using agreement to gate learning. Conditioned on this reliability signal, we distill knowledge through full tutor reasoning trajectories (not just final answers), providing a dense and stable learning signal. Empirically, this consensus-gated trajectory distillation substantially improves transfer to the document-free student. Held-out in-domain accuracy under asymmetric evaluation improves from 46.0% to 62.0%, and average (maj@8) accuracy on public document-free math benchmarks improves from 20.2% to 35.4%.
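To make the gating step concrete, here is a minimal Python sketch of consensus-gated trace selection as described above: sample several document-grounded tutor traces, measure agreement on the final answer, and keep full reasoning trajectories as distillation targets only when agreement clears a threshold. The function name, the trace format, and the 0.75 threshold are illustrative assumptions for this sketch, not the paper's released code.

```python
from collections import Counter

def consensus_gate(tutor_traces, threshold=0.75):
    """Gate tutor supervision by answer consensus.

    tutor_traces: list of (reasoning_text, final_answer) pairs sampled
    from the tutor conditioned on (question, document).
    Returns trajectories to distill from, or None if consensus is too weak.
    """
    answers = [answer for _, answer in tutor_traces]
    majority_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    if agreement < threshold:
        return None  # unreliable signal: skip this question entirely
    # Keep the full reasoning trajectories (not just final answers)
    # that reach the majority answer; the student is later trained on
    # these targets while seeing the question alone.
    return [(reasoning, answer) for reasoning, answer in tutor_traces
            if answer == majority_answer]
```

In a full training loop, the returned trajectories would serve as supervised targets for the document-free student, and questions that fail the gate would simply be skipped, consistent with using agreement as the reliability signal.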
Source: arXiv:2602.20574