arXiv submission date: 2026-04-15
📄 Abstract - MaMe & MaRe: Matrix-Based Token Merging and Restoration for Efficient Visual Perception and Synthesis

Token compression is crucial for mitigating the quadratic complexity of self-attention mechanisms in Vision Transformers (ViTs), which often involve numerous input tokens. Existing methods, such as ToMe, rely on GPU-inefficient operations (e.g., sorting, scattered writes), introducing overheads that limit their effectiveness. We introduce MaMe, a training-free, differentiable token merging method based entirely on matrix operations, which is GPU-friendly to accelerate ViTs. Additionally, we present MaRe, its inverse operation, for token restoration, forming a MaMe+MaRe pipeline for image synthesis. When applied to pre-trained models, MaMe doubles ViT-B throughput with a 2% accuracy drop. Notably, fine-tuning the last layer with MaMe boosts ViT-B accuracy by 1.0% at 1.1x speed. In SigLIP2-B@512 zero-shot classification, MaMe provides 1.3x acceleration with negligible performance degradation. In video tasks, MaMe accelerates VideoMAE-L by 48.5% on Kinetics-400 with only a 0.84% accuracy loss. Furthermore, MaMe achieves simultaneous improvements in both performance and speed on some tasks. In image synthesis, the MaMe+MaRe pipeline enhances quality while reducing Stable Diffusion v2.1 generation latency by 31%. Collectively, these results demonstrate MaMe's and MaRe's effectiveness in accelerating vision models. The code is available at this https URL.
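The abstract's key contrast is that merging and restoring tokens purely through matrix multiplications avoids sorting and scattered writes. A minimal sketch of that idea, assuming a row-normalized assignment matrix `A` mapping N input tokens to M merged tokens (the functions `merge_tokens`/`restore_tokens` and the transpose-based unmerge are illustrative stand-ins, not the actual MaMe/MaRe construction, which this abstract does not detail):

```python
import numpy as np

def merge_tokens(X, A):
    """Merge N tokens X (N, d) into M tokens via one matmul with A (M, N).

    Rows of A are normalized so each merged token is a weighted average of
    its source tokens -- no sorting or scatter ops involved.
    """
    A = A / A.sum(axis=1, keepdims=True)  # row-normalize assignment weights
    return A @ X                          # (M, d)

def restore_tokens(Y, A):
    """Approximately restore N tokens from merged Y (M, d).

    A simple transpose-based 'unmerge': each original token position reads
    back (a weighted mix of) the merged token(s) it contributed to.
    """
    W = A.T / np.maximum(A.T.sum(axis=1, keepdims=True), 1e-12)
    return W @ Y                          # (N, d)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 4))       # 8 tokens of dimension 4
A = np.eye(4).repeat(2, axis=1)       # hard pairing: tokens (0,1)->0, (2,3)->1, ...
Y = merge_tokens(X, A)                # merged: (4, 4)
X_hat = restore_tokens(Y, A)          # restored: (8, 4)
```

Because both directions are dense matmuls, they map directly onto GPU-friendly kernels and remain differentiable, which is what lets such a pipeline plug into both recognition and synthesis models.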

Top-level tags: computer vision, model training, model evaluation
Detailed tags: token merging, vision transformers, efficient inference, image synthesis, video acceleration

MaMe & MaRe: Matrix-Based Token Merging and Restoration for Efficient Visual Perception and Synthesis


1️⃣ One-sentence summary

This paper proposes MaMe, a novel, training-free token merging method built entirely on matrix operations, together with its inverse restoration method MaRe; they efficiently accelerate Vision Transformer models, delivering significant speedups across tasks such as image classification, video understanding, and image generation while preserving or even improving model performance.

Source: arXiv:2604.13432