C$^2$DLM: Causal Concept-Guided Diffusion Large Language Models
1️⃣ One-Sentence Summary
This paper proposes a new diffusion large language model that introduces a concept-level causal graph to guide the model in learning causal relationships between concepts, significantly improving its reasoning ability and training efficiency.
Autoregressive (AR) language models and Diffusion Language Models (DLMs) constitute the two principal paradigms of large language models, yet both suffer from insufficient reasoning capabilities. Human reasoning inherently relies on causal knowledge and thought, which are reflected in natural language. However, the AR paradigm models language as next-token prediction (a strictly left-to-right, token-by-token order), whereas natural language itself exhibits more flexible causal structures; in the DLM paradigm, the attention mechanism is fully connected and disregards causal order entirely. To fill this gap, we propose a \underline{\textbf{C}}ausal \underline{\textbf{C}}oncept-Guided \underline{\textbf{D}}iffusion \underline{\textbf{L}}anguage \underline{\textbf{M}}odel (C$^2$DLM). Starting from the DLM's fully connected attention, C$^2$DLM first obtains a concept-level causal graph from a teacher model and then explicitly guides attention to learn the causal relationships between concepts. By focusing on causal relationships and avoiding interference from difficult subgoals involving causal inversion, C$^2$DLM improves performance by 12\% with about a 3.2$\times$ training speedup on the COT-OrderPerturb task, and achieves an average gain of 1.31\% across six downstream reasoning tasks. More details are available in the repository \href{this https URL}{here}.
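To make the idea of "guiding fully connected attention with a concept-level causal graph" concrete, below is a minimal, illustrative sketch, not the paper's actual implementation. It assumes a hypothetical token-to-concept assignment (`concept_of_token`), a hypothetical concept-level causal adjacency matrix (`causal_adjacency`), and a `guidance_strength` knob, and shows how such a graph could be lifted to a token-level additive attention bias on top of bidirectional (DLM-style) attention.

```python
# Illustrative sketch only: all names (concept_of_token, causal_adjacency,
# guidance_strength) are hypothetical and not taken from the paper.
import torch
import torch.nn.functional as F

def causal_concept_bias(concept_of_token: torch.Tensor,
                        causal_adjacency: torch.Tensor,
                        guidance_strength: float = 1.0) -> torch.Tensor:
    """Lift a concept-level causal graph to a token-level attention bias.

    concept_of_token: (seq_len,) integer concept id for each token.
    causal_adjacency: (num_concepts, num_concepts), A[i, j] = 1 if concept j is
        an (assumed) causal parent of concept i.
    Returns a (seq_len, seq_len) additive bias: positive where the query token's
    concept causally depends on the key token's concept, zero elsewhere.
    """
    # token_level[i, j] = causal_adjacency[concept_of_token[i], concept_of_token[j]]
    token_level = causal_adjacency[concept_of_token][:, concept_of_token]
    return guidance_strength * token_level.float()

# Toy example: 6 tokens grouped into 3 concepts, causal chain 0 -> 1 -> 2.
concepts = torch.tensor([0, 0, 1, 1, 2, 2])
adjacency = torch.tensor([[0, 0, 0],
                          [1, 0, 0],   # concept 1 depends on concept 0
                          [0, 1, 0]])  # concept 2 depends on concept 1
bias = causal_concept_bias(concepts, adjacency, guidance_strength=2.0)

# Apply the bias on top of fully connected (bidirectional) attention, as a DLM
# uses, instead of a strict left-to-right causal mask.
q = k = v = torch.randn(1, 1, 6, 8)  # (batch, heads, seq, head_dim)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=bias[None, None])
print(out.shape)  # torch.Size([1, 1, 6, 8])
```

The design choice illustrated here is that the graph softly biases attention toward causally related concepts rather than hard-masking everything else, preserving the DLM's fully connected attention while encoding the causal structure the abstract describes.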