arXiv submission date: 2025-12-04
📄 Abstract - EtCon: Edit-then-Consolidate for Reliable Knowledge Editing

Knowledge editing aims to update specific facts in large language models (LLMs) without full retraining. Prior efforts tune the knowledge layers of LLMs and have proven effective for making selective edits. However, a significant gap exists between their performance in controlled, teacher-forcing evaluations and their real-world effectiveness in lifelong learning scenarios, which greatly limits their practical applicability. Our empirical analysis reveals two recurring issues behind this gap: (1) most traditional methods cause the edited model to overfit to the new fact, degrading pre-trained capabilities; (2) a knowledge consolidation stage is missing, leaving new facts insufficiently integrated into the LLM's inference-time behavior under autoregressive generation and creating a mismatch between parametric knowledge and actual generation behavior. To this end, we propose Edit-then-Consolidate, a novel knowledge editing paradigm that bridges the gap between theoretical knowledge editing methods and their real-world applicability. Specifically, (1) our framework mitigates overfitting via Targeted Proximal Supervised Fine-Tuning (TPSFT), which localizes the edit via a trust-region objective that limits policy drift; (2) a subsequent consolidation stage uses Group Relative Policy Optimization (GRPO) to align the edited knowledge with the CoT-based inference policy by optimizing trajectory-level behavior under comprehensive reward signals. Extensive experiments demonstrate that our framework consistently improves editing reliability and generalization under real-world evaluations, while better preserving locality and pre-trained capabilities.
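The abstract's two stages can be sketched numerically. The following is a minimal, hypothetical sketch (not the paper's implementation): `tpsft_loss` combines a cross-entropy term on the edited fact with a KL penalty against the pre-edit reference model, a common way to realize a trust-region-style constraint on policy drift; `grpo_advantages` normalizes rewards within a sampled group, the core group-relative step in GRPO. All function names and the `kl_coef` parameter are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def tpsft_loss(policy_logits, ref_logits, target_ids, kl_coef=0.1):
    """Trust-region-style SFT loss (sketch): cross-entropy on the new fact's
    tokens plus a KL(policy || reference) penalty that discourages drift
    from the pre-edit model. Shapes: (T, V) logits, (T,) target ids."""
    p = softmax(policy_logits)
    q = softmax(ref_logits)
    steps = np.arange(len(target_ids))
    ce = -np.log(p[steps, target_ids]).mean()          # fit the edited fact
    kl = (p * (np.log(p) - np.log(q))).sum(-1).mean()  # stay near reference
    return ce + kl_coef * kl

def grpo_advantages(rewards):
    """Group-relative advantages (sketch): z-score each trajectory's reward
    against the other trajectories sampled for the same prompt."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)
```

With `kl_coef = 0` this reduces to plain SFT; larger values keep the edited policy closer to the reference, which is how the sketch models "limiting policy drift". The group-normalized advantages would then weight trajectory log-probabilities in a PPO-style update during consolidation.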

Top-level tags: llm, model training, model evaluation
Detailed tags: knowledge editing, lifelong learning, policy optimization, fine-tuning, reliability

EtCon: Edit-then-Consolidate for Reliable Knowledge Editing


1️⃣ One-sentence summary

This paper proposes a new method called Edit-then-Consolidate, which first makes a targeted edit to a specific fact in a large language model and then reinforces and consolidates it. This addresses two problems with traditional knowledge editing techniques, namely that the edited model tends to overfit and that new knowledge often fails to take effect in real-world use, thereby substantially improving the reliability and practicality of model editing.


Source: arXiv 2512.04753