Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting
1️⃣ One-Sentence Summary
This paper proposes a new method called Entropy-Adaptive Fine-Tuning (EAFT), which identifies and suppresses "confident conflict" data, where the model is highly confident in its own prediction but disagrees with the external supervision. By doing so, it effectively mitigates the catastrophic forgetting caused by standard supervised fine-tuning while preserving downstream task performance.
Supervised Fine-Tuning (SFT) is the standard paradigm for domain adaptation, yet it frequently incurs the cost of catastrophic forgetting. In sharp contrast, on-policy Reinforcement Learning (RL) effectively preserves general capabilities. We investigate this discrepancy and identify a fundamental distributional gap: while RL aligns with the model's internal belief, SFT forces the model to fit external supervision. This mismatch often manifests as "Confident Conflicts": tokens characterized by low probability but low entropy. In these instances, the model is highly confident in its own prediction but is forced to learn a divergent ground truth, triggering destructive gradient updates. To address this, we propose Entropy-Adaptive Fine-Tuning (EAFT). Unlike methods relying solely on prediction probability, EAFT utilizes token-level entropy as a gating mechanism to distinguish between epistemic uncertainty and knowledge conflict. This allows the model to learn from uncertain samples while suppressing gradients on conflicting data. Extensive experiments on the Qwen and GLM series (ranging from 4B to 32B parameters) across mathematical, medical, and agentic domains confirm our hypothesis. EAFT consistently matches the downstream performance of standard SFT while significantly mitigating the degradation of general capabilities.
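To make the entropy-gating idea concrete, below is a minimal PyTorch sketch of an entropy-gated token-level SFT loss. It is an illustration of the general mechanism described in the abstract, not the paper's exact formulation: the specific thresholds (`entropy_threshold`, `prob_threshold`) and the hard binary mask are assumptions for clarity; the actual EAFT gating function may be a continuous entropy-dependent weight.

```python
import torch
import torch.nn.functional as F


def entropy_gated_sft_loss(logits, targets, entropy_threshold=1.0, prob_threshold=0.3):
    """Illustrative entropy-gated cross-entropy loss (not the paper's exact recipe).

    logits:  (batch, seq_len, vocab) model outputs
    targets: (batch, seq_len) ground-truth token ids from the SFT data
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Per-token predictive entropy: high = epistemic uncertainty, low = confidence.
    entropy = -(probs * log_probs).sum(dim=-1)  # (batch, seq_len)

    # Probability the model assigns to the supervised ground-truth token.
    target_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    target_prob = target_logp.exp()

    # "Confident conflict": the model is confident (low entropy) yet the
    # supervision label is unlikely under the model (low target probability).
    conflict = (entropy < entropy_threshold) & (target_prob < prob_threshold)

    # Gate: keep gradients for uncertain or agreeing tokens, suppress conflicts.
    gate = (~conflict).float()

    token_nll = -target_logp
    return (gate * token_nll).sum() / gate.sum().clamp(min=1.0)
```

In this sketch, tokens where the model is uncertain (high entropy) or already agrees with the label still receive full gradient, while confidently conflicting tokens are masked out, which is the behavior the abstract attributes to EAFT's entropy-based gate.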
Source: arXiv: 2601.02151