大语言模型在持续预训练中如何学习概念? / How Do Large Language Models Learn Concepts During Continual Pre-Training?
1️⃣ One-sentence summary
By analyzing the internal "concept circuits" of large language models, this paper reveals the concrete dynamics of how models acquire and forget concepts, and how different concepts influence one another, during continual learning of new knowledge, offering a new perspective for designing more interpretable and robust model training methods.
Human beings primarily understand the world through concepts (e.g., dog): abstract mental representations that structure perception, reasoning, and learning. However, how large language models (LLMs) acquire, retain, and forget such concepts during continual pre-training remains poorly understood. In this work, we study how individual concepts are acquired and forgotten, as well as how multiple concepts interact through interference and synergy. We link these behavioral dynamics to LLMs' internal Concept Circuits, computational subgraphs associated with specific concepts, and incorporate Graph Metrics to characterize circuit structure. Our analysis reveals: (1) LLMs' concept circuits provide a non-trivial, statistically significant signal of concept learning and forgetting; (2) concept circuits exhibit a stage-wise temporal pattern during continual pre-training, with an early increase followed by a gradual decrease and stabilization; (3) concepts with larger learning gains tend to exhibit greater forgetting under subsequent training; (4) semantically similar concepts induce stronger interference than weakly related ones; (5) conceptual knowledge differs in transferability, with some concepts significantly facilitating the learning of others. Together, our findings offer a circuit-level view of concept learning dynamics and inform the design of more interpretable and robust concept-aware training strategies for LLMs.
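To make the "Graph Metrics over concept circuits" idea concrete, here is a minimal sketch, not from the paper: a concept circuit is modeled as a directed graph over model components (the component names and the choice of metrics are illustrative assumptions), and a few simple structural metrics are computed on it.

```python
from collections import defaultdict

def graph_metrics(edges):
    """Compute simple structural metrics of a directed graph given as an edge list.

    Returns node count, edge count, density (edges / possible directed edges),
    and average out-degree -- the kind of metrics one might track as a circuit
    grows and shrinks over continual pre-training.
    """
    nodes = {u for u, v in edges} | {v for u, v in edges}
    n, m = len(nodes), len(edges)
    density = m / (n * (n - 1)) if n > 1 else 0.0
    avg_out_degree = m / n if n else 0.0
    return {"nodes": n, "edges": m, "density": density,
            "avg_out_degree": avg_out_degree}

# Toy circuit for the concept "dog": edges between hypothetical components
# (attention heads, MLP neurons); names are illustrative, not from the paper.
circuit = [
    ("embed", "head_2.3"),
    ("embed", "mlp_5.n101"),
    ("head_2.3", "mlp_5.n101"),
    ("mlp_5.n101", "head_9.1"),
    ("head_9.1", "logits"),
]
print(graph_metrics(circuit))
# -> {'nodes': 5, 'edges': 5, 'density': 0.25, 'avg_out_degree': 1.0}
```

Tracking such metrics across training checkpoints would surface the stage-wise pattern the paper reports (early growth, then gradual shrinkage and stabilization).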
Source: arXiv: 2601.03570