arXiv submission date: 2026-04-14
📄 Abstract - Do Transformers Use their Depth Adaptively? Evidence from a Relational Reasoning Task

We investigate whether transformers use their depth adaptively across tasks of increasing difficulty. Using a controlled multi-hop relational reasoning task based on family stories, where difficulty is determined by the number of relationship hops that must be composed, we monitor (i) how predictions evolve across layers via early readouts (the logit lens) and (ii) how task-relevant information is integrated across tokens via causal patching. For pretrained models, we find some limited evidence for adaptive depth use: some larger models need fewer layers to arrive at plausible answers for easier tasks, and models generally use more layers to integrate information across tokens as chain length increases. For models finetuned on the task, we find clearer and more consistent evidence of adaptive depth use, with the effect being stronger for less constrained finetuning regimes that do not preserve general language modeling abilities.
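The "early readout" (logit lens) method described in the abstract can be sketched in a few lines: at each layer, the intermediate hidden state is projected directly through the model's final unembedding matrix, giving a per-layer next-token distribution that shows how the prediction evolves with depth. A minimal NumPy sketch with random toy weights; the dimensions, layer count, and vocabulary here are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab, n_layers = 16, 10, 6

# Toy stand-ins: one residual-stream state per layer, plus the unembedding matrix
# that the real model would only apply after the final layer.
hidden_states = [rng.normal(size=d_model) for _ in range(n_layers)]
W_U = rng.normal(size=(d_model, vocab))

def logit_lens(h, W_U):
    """Project an intermediate hidden state straight to vocabulary logits."""
    return h @ W_U

# Track how the top-1 "early readout" prediction changes across layers;
# adaptive depth use would show easy inputs converging at earlier layers.
for layer, h in enumerate(hidden_states):
    logits = logit_lens(h, W_U)
    print(f"layer {layer}: top token = {int(np.argmax(logits))}")
```

In a real transformer the hidden states would come from forward-pass hooks (and are typically passed through the final layer norm before unembedding); the sketch only shows the projection step itself.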

Top-level tags: llm theory model evaluation
Detailed tags: transformers relational reasoning depth analysis causal patching adaptive computation

Do Transformers Use their Depth Adaptively? Evidence from a Relational Reasoning Task


1️⃣ One-sentence summary

Using a multi-hop relational reasoning task based on family stories, this paper finds that transformers finetuned on the task adapt their use of network depth to task difficulty, whereas pretrained models show only limited adaptivity.

Source: arXiv 2604.12426