ViT-AdaLA: Adapting Vision Transformers with Linear Attention
1️⃣ One-Sentence Summary
This paper proposes a new method called ViT-AdaLA, which efficiently transfers knowledge from existing high-performance vision foundation models into more computationally efficient linear attention models through three steps (attention alignment, feature alignment, and fine-tuning), significantly reducing computational cost while maintaining strong performance.
Vision Transformer (ViT) based vision foundation models (VFMs) have achieved remarkable performance across diverse vision tasks, but suffer from quadratic complexity that limits scalability to long sequences. Existing linear attention approaches for ViTs are typically trained from scratch, requiring substantial computational resources, while linearization-based methods developed for large language model decoders do not transfer well to ViTs. To address these challenges, we propose ViT-AdaLA, a novel framework for effectively adapting and transferring prior knowledge from VFMs to linear attention ViTs. ViT-AdaLA consists of three stages: attention alignment, feature alignment, and supervised fine-tuning. In the attention alignment stage, we align vanilla linear attention with the original softmax-based attention in each block to approximate the behavior of softmax attention. However, residual approximation errors inevitably accumulate across layers. We mitigate this by fine-tuning the linearized ViT to align its final-layer features with a frozen softmax VFM teacher. Finally, the adapted prior knowledge is transferred to downstream tasks through supervised fine-tuning. Extensive experiments on classification and segmentation tasks demonstrate the effectiveness and generality of ViT-AdaLA over various state-of-the-art linear attention counterparts.
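To make the softmax-vs-linear attention trade-off concrete, here is a minimal NumPy sketch (not the paper's implementation) contrasting standard softmax attention, which is O(N²) in sequence length, with vanilla linear attention, which reassociates the matrix products to run in O(N). The feature map φ(x) = elu(x) + 1 is an illustrative assumption, a common choice in the linear attention literature; the paper's exact formulation may differ.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard softmax attention: forms an N x N score matrix, O(N^2).
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, eps=1e-6):
    # Vanilla linear attention: with a positive feature map phi,
    # reassociating (phi(Q) phi(K)^T) V as phi(Q) (phi(K)^T V)
    # avoids the N x N matrix, giving O(N) cost in sequence length.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 (assumed)
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                      # (d, d_v) summary, independent of N
    z = Qp @ Kp.sum(axis=0) + eps      # per-query normalizer
    return (Qp @ kv) / z[:, None]

rng = np.random.default_rng(0)
N, d = 8, 4
Q, K, V = rng.normal(size=(3, N, d))
out_soft = softmax_attention(Q, K, V)
out_lin = linear_attention(Q, K, V)
print(out_soft.shape, out_lin.shape)  # both (8, 4)
```

The attention alignment stage described above would then train the linear branch so its output approximates the softmax branch per block, before feature alignment corrects the errors that still accumulate across layers.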
Source: arXiv: 2603.16063