深入探究大语言模型在表格理解中的内部机制 / A Closer Look into LLMs for Table Understanding
1️⃣ One-sentence summary
Through an empirical study, this paper finds that LLMs follow a three-phase attention pattern when understanding tabular data, that table tasks require deeper network processing than math reasoning, and it reveals how Mixture-of-Experts models activate table-specific experts for table understanding.
Despite the success of Large Language Models (LLMs) in table understanding, their internal mechanisms remain unclear. In this paper, we conduct an empirical study on 16 LLMs, covering general LLMs, specialist tabular LLMs, and Mixture-of-Experts (MoE) models, to explore how LLMs understand tabular data and perform downstream tasks. Our analysis focuses on four dimensions: attention dynamics, effective layer depth, expert activation, and the impact of input designs. Key findings include: (1) LLMs follow a three-phase attention pattern -- early layers scan the table broadly, middle layers localize relevant cells, and late layers amplify their contributions; (2) tabular tasks require deeper layers than math reasoning to reach stable predictions; (3) MoE models activate table-specific experts in middle layers, with early and late layers sharing general-purpose experts; (4) Chain-of-Thought prompting increases table attention, further enhanced by table-tuning. We hope these findings and insights can facilitate interpretability and future research on table-related tasks.
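To make the first finding concrete, one way to probe the three-phase attention pattern is to measure, layer by layer, what fraction of a query token's attention mass lands on the positions that serialize the table. The sketch below uses synthetic attention weights and hypothetical names (`table_attention_share`, the token-position range); a real analysis would export attention tensors from an actual LLM (e.g. via a transformers `output_attentions=True` forward pass) rather than sampling them.

```python
import numpy as np

# Hypothetical sketch: estimate how much attention each layer directs at
# table tokens, as one might when looking for the scan/localize/amplify
# phases reported in the paper. All attention weights here are synthetic.

rng = np.random.default_rng(0)
n_layers, n_heads, seq_len = 24, 8, 64
table_positions = np.arange(10, 40)  # assume tokens 10..39 serialize the table

# attentions[l] has shape (n_heads, seq_len, seq_len); each row is a
# softmax distribution over key positions, so it sums to 1.
attentions = rng.dirichlet(np.ones(seq_len), size=(n_layers, n_heads, seq_len))

def table_attention_share(layer_attn, table_pos):
    """Fraction of the final token's attention mass on table tokens,
    averaged over heads, for a single layer."""
    last_row = layer_attn[:, -1, :]              # (n_heads, seq_len)
    return float(last_row[:, table_pos].sum(axis=-1).mean())

shares = [table_attention_share(attentions[l], table_positions)
          for l in range(n_layers)]
for l, s in enumerate(shares):
    print(f"layer {l:2d}: table attention share = {s:.3f}")
```

With real model attentions, a phase structure would show up as this share rising in middle layers and concentrating on fewer cells toward the end; with the uniform synthetic weights above, the share simply hovers near the table's fraction of the sequence.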
Source: arXiv: 2603.15402