Abstract - From independent patches to coordinated attention: Controlling information flow in vision transformers
We make the information transmitted by attention an explicit, measurable quantity in vision transformers. By inserting variational information bottlenecks on all attention-mediated writes to the residual stream -- without other architectural changes -- we train models with an explicit information cost and obtain a controllable spectrum from independent patch processing to fully expressive global attention. On ImageNet-100, we characterize how classification behavior and information routing evolve across this spectrum, and provide initial insights into how global visual representations emerge from local patch processing by analyzing the first attention heads that transmit information. By biasing learning toward solutions with constrained internal communication, our approach yields models that are more tractable for mechanistic analysis and more amenable to control.
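The core mechanism described above — a variational information bottleneck gating each attention write to the residual stream — can be illustrated with a minimal sketch. Everything here (the `vib_gate` name, the toy per-dimension weights standing in for learned projection matrices, the `beta` weight on the KL cost) is a hypothetical illustration of the general VIB recipe, not the paper's implementation: the attention output is mapped to a mean and log-variance, a noisy code is sampled via the reparameterization trick, and the KL divergence to a standard normal prior is the explicit "information cost" added to the training loss.

```python
import math
import random

random.seed(0)

def vib_gate(h, w_mu, w_logvar, beta=1e-3):
    """Hypothetical variational bottleneck on one attention write.

    h: one token's attention output (list of floats).
    w_mu / w_logvar: toy per-dimension scales standing in for learned
    projections. Returns the noisy code z and the KL 'information cost'
    that the training loss would penalize with weight beta.
    """
    mu = [hi * w for hi, w in zip(h, w_mu)]          # mean of the code
    logvar = [hi * w for hi, w in zip(h, w_logvar)]  # learned log-variance
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1)
    z = [m + math.exp(0.5 * lv) * random.gauss(0, 1)
         for m, lv in zip(mu, logvar)]
    # KL( N(mu, sigma^2) || N(0, 1) ) summed over dimensions; always >= 0,
    # and zero only when the write carries no information (mu=0, sigma=1)
    kl = 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                   for m, lv in zip(mu, logvar))
    return z, beta * kl

h = [0.5, -1.2, 0.3, 0.8]  # toy attention output for a single token
z, cost = vib_gate(h, [0.2] * 4, [-1.0] * 4)
print(len(z), cost >= 0.0)
```

Sweeping `beta` from large to small is what yields the spectrum the abstract describes: a high information cost drives all writes toward the zero-information prior (independent patch processing), while a low cost recovers fully expressive global attention.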
From independent patches to coordinated attention: Controlling information flow in vision transformers
1️⃣ One-sentence summary
By introducing information bottlenecks into the attention mechanism of vision transformers, this paper makes the degree of internal communication controllable like a dial: the model transitions smoothly from "every patch for itself" local processing to "fully cooperative" global attention, which helps us understand and analyze the model's internal workings.