arXiv submission date: 2026-02-04
📄 Abstract - Semantic Self-Distillation for Language Model Uncertainty

Large language models present challenges for principled uncertainty quantification, in part due to their complexity and the diversity of their outputs. Semantic dispersion, or the variance in the meaning of sampled answers, has been proposed as a useful proxy for model uncertainty, but the associated computational cost prohibits its use in latency-critical applications. We show that sampled semantic distributions can be distilled into lightweight student models which estimate a prompt-conditioned uncertainty before the language model generates an answer token. The student model predicts a semantic distribution over possible answers; the entropy of this distribution provides an effective uncertainty signal for hallucination prediction, and the probability density allows candidate answers to be evaluated for reliability. On TriviaQA, our student models match or outperform finite-sample semantic dispersion for hallucination prediction and provide a strong signal for out-of-domain answer detection. We term this technique Semantic Self-Distillation (SSD), which we suggest provides a general framework for distilling predictive uncertainty in complex output spaces beyond language.
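As a rough illustration of the approach described in the abstract, the sketch below shows a hypothetical student head in PyTorch: it maps a prompt embedding to a distribution over semantic answer clusters, is trained by distilling the empirical cluster distribution obtained from sampled and clustered teacher answers, and exposes the entropy of its prediction as the pre-generation uncertainty signal. All names (`SemanticStudent`, `prompt_dim`, `num_clusters`, the KL objective, and the clustering step) are assumptions made for exposition, not the paper's released implementation.

```python
# Minimal sketch of the Semantic Self-Distillation (SSD) idea, under assumed
# names and shapes; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticStudent(nn.Module):
    """Lightweight head mapping a prompt embedding to a distribution over
    semantic answer clusters, before any answer token is generated."""

    def __init__(self, prompt_dim: int = 768, hidden_dim: int = 256, num_clusters: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(prompt_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_clusters),
        )

    def forward(self, prompt_emb: torch.Tensor) -> torch.Tensor:
        # Log-probabilities over semantic clusters for each prompt.
        return F.log_softmax(self.net(prompt_emb), dim=-1)


def distillation_loss(student_log_probs: torch.Tensor, teacher_probs: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student), where teacher_probs is the empirical cluster
    distribution of answers sampled from the language model."""
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")


def semantic_entropy(student_log_probs: torch.Tensor) -> torch.Tensor:
    """Entropy of the predicted semantic distribution: the uncertainty signal
    used to flag likely hallucinations."""
    probs = student_log_probs.exp()
    return -(probs * student_log_probs).sum(dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    student = SemanticStudent()

    # Placeholder batch: prompt embeddings and teacher-derived cluster distributions.
    prompt_emb = torch.randn(4, 768)
    teacher_probs = torch.softmax(torch.randn(4, 16), dim=-1)

    log_probs = student(prompt_emb)
    loss = distillation_loss(log_probs, teacher_probs)
    uncertainty = semantic_entropy(log_probs)

    # A candidate answer assigned to cluster c could be scored by log_probs[:, c].
    print(loss.item(), uncertainty.tolist())
```

In this reading, a high predicted entropy marks prompts where the model's answers are likely to disagree in meaning (a hallucination-risk signal), while the per-cluster probability can score whether a given candidate answer falls in a region the model considers reliable.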

Top-level tags: llm, model evaluation, natural language processing
Detailed tags: uncertainty quantification, semantic distillation, hallucination detection, model calibration, knowledge distillation

Semantic Self-Distillation for Language Model Uncertainty


1️⃣ One-Sentence Summary

This paper proposes Semantic Self-Distillation (SSD): a lightweight student model is trained to predict, directly from the prompt, the semantic distribution of a large language model's possible answers, yielding an efficient uncertainty estimate that can be used to detect hallucinations or unreliable responses before generation.

Source: arXiv:2602.04577