arXiv submission date: 2026-02-10
📄 Abstract - Step-resolved data attribution for looped transformers

We study how individual training examples shape the internal computation of looped transformers, where a shared block is applied for $\tau$ recurrent iterations to enable latent reasoning. Existing training-data influence estimators such as TracIn yield a single scalar score that aggregates over all loop iterations, obscuring when during the recurrent computation a training example matters. We introduce \textit{Step-Decomposed Influence (SDI)}, which decomposes TracIn into a length-$\tau$ influence trajectory by unrolling the recurrent computation graph and attributing influence to specific loop iterations. To make SDI practical at transformer scale, we propose a TensorSketch implementation that never materialises per-example gradients. Experiments on looped GPT-style models and algorithmic reasoning tasks show that SDI scales efficiently, matches full-gradient baselines with low error, and supports a broad range of data attribution and interpretability tasks with per-step insights into the latent reasoning process.
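To make the decomposition concrete, here is a minimal PyTorch sketch of one plausible reading of SDI: the shared block's parameters are passed as $\tau$ independent copies so that the gradient contribution of each loop iteration can be separated, and the per-step TracIn-style dot product $\eta \langle g_t(z), g_t(z') \rangle$ forms the influence trajectory. The names `LoopedModel`, `per_step_grads`, and `sdi_trajectory`, and the exact attribution rule, are illustrative assumptions, not the paper's implementation.

```python
import torch
from torch.func import functional_call, grad

class LoopedModel(torch.nn.Module):
    """Toy weight-tied model: one shared block applied tau times,
    standing in for a looped transformer block."""
    def __init__(self, dim, tau):
        super().__init__()
        self.block = torch.nn.Linear(dim, dim)
        self.head = torch.nn.Linear(dim, dim)
        self.tau = tau

def per_step_grads(model, x, y):
    """Split the loss gradient on the shared block by loop iteration:
    pass tau independent parameter copies, use copy t at iteration t,
    and differentiate each copy. The copies' gradients sum to the
    ordinary gradient of the tied parameters."""
    base = {n: p.detach() for n, p in model.block.named_parameters()}

    def loss_fn(step_params):
        h = x
        for t in range(model.tau):
            h = torch.tanh(functional_call(model.block, step_params[t], (h,)))
        return torch.nn.functional.mse_loss(model.head(h), y)

    copies = [{n: p.clone() for n, p in base.items()} for _ in range(model.tau)]
    return grad(loss_fn)(copies)

def sdi_trajectory(model, z_train, z_test, eta=1.0):
    """Length-tau trajectory of per-step dot products
    eta * <g_t(train), g_t(test)> (one plausible decomposition;
    the paper's exact attribution rule may differ)."""
    g_tr = per_step_grads(model, *z_train)
    g_te = per_step_grads(model, *z_test)
    return [eta * sum((a[n] * b[n]).sum() for n in a).item()
            for a, b in zip(g_tr, g_te)]

torch.manual_seed(0)
model = LoopedModel(dim=8, tau=4)
z_train = (torch.randn(2, 8), torch.randn(2, 8))
z_test = (torch.randn(2, 8), torch.randn(2, 8))
print(sdi_trajectory(model, z_train, z_test))  # one influence score per loop step
```

Note that summing this trajectory does not in general recover the scalar TracIn score, since the full dot product also contains cross-step terms $\langle g_s(z), g_t(z') \rangle$ for $s \neq t$; how those are attributed is a design choice of the method.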
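On the "never materialises per-example gradients" claim: one standard route, which the paper's TensorSketch implementation may plausibly take, exploits the fact that a linear layer's per-example gradient is an outer product of the output gradient and the input activation, and TensorSketch (Pham & Pagh, 2013) can sketch an outer product directly from count sketches of its two factors. A minimal sketch of that primitive, with hash tables `h1`, `h2` and sign vectors `s1`, `s2` as assumed inputs:

```python
import torch

def count_sketch(v, h, s, D):
    """CountSketch: coordinate i of v is multiplied by sign s[i] and
    added to bucket h[i]; sketched inner products match the originals
    in expectation."""
    out = torch.zeros(D, dtype=v.dtype)
    out.index_add_(0, h, s * v)
    return out

def tensor_sketch_outer(a, b, h1, s1, h2, s2, D):
    """TensorSketch of the outer product a (x) b without materialising
    it: convolving the two count sketches via FFT equals count-sketching
    the outer product itself (Pham & Pagh, 2013)."""
    fa = torch.fft.fft(count_sketch(a, h1, s1, D))
    fb = torch.fft.fft(count_sketch(b, h2, s2, D))
    return torch.fft.ifft(fa * fb).real

# Demo: sketched dot products approximate exact outer-product dot products.
d, D = 512, 4096
gen = torch.Generator().manual_seed(0)
h1, h2 = (torch.randint(0, D, (d,), generator=gen) for _ in range(2))
s1, s2 = (torch.randint(0, 2, (d,), generator=gen).float() * 2 - 1
          for _ in range(2))

a, b, c, e = (torch.randn(d) for _ in range(4))
approx = (tensor_sketch_outer(a, b, h1, s1, h2, s2, D)
          @ tensor_sketch_outer(c, e, h1, s1, h2, s2, D))
exact = (a @ c) * (b @ e)  # <a(x)b, c(x)e> without any sketching
print(float(approx), float(exact))
```

Under this reading, each per-step, per-example weight gradient could be sketched from its activation and output-gradient factors, and the influence dot products computed entirely in the $D$-dimensional sketch space.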

Top-level tags: model training, theory, natural language processing
Detailed tags: data attribution, transformer interpretability, influence functions, recurrent computation, TracIn

Step-resolved data attribution for looped transformers


1️⃣ One-sentence summary

This paper proposes a new method called Step-Decomposed Influence, which precisely tracks how individual training examples affect each step of a looped transformer's recurrent reasoning, helping us better understand the model's internal 'thinking' process.

Source: arXiv:2602.10097