What Helps -- and What Hurts: Bidirectional Explanations for Vision Transformers
1️⃣ One-sentence summary
This paper proposes a new method called BiCAM, which reveals not only which image regions support a Vision Transformer's prediction but also which regions suppress it, yielding more complete and more contrastive explanations and enabling fast detection of adversarial examples.
Vision Transformers (ViTs) achieve strong performance in visual recognition, yet their decision-making remains difficult to interpret. We propose BiCAM, a bidirectional class activation mapping method that captures both supportive (positive) and suppressive (negative) contributions to model predictions. Unlike prior CAM-based approaches that discard negative signals, BiCAM preserves signed attributions to produce more complete and contrastive explanations. BiCAM further introduces a Positive-to-Negative Ratio (PNR) that summarizes attribution balance and enables lightweight detection of adversarial examples without retraining. Across ImageNet, VOC, and COCO, BiCAM improves localization and faithfulness while remaining computationally efficient. It generalizes to multiple ViT variants, including DeiT and Swin. These results suggest the importance of modeling both supportive and suppressive evidence for interpreting transformer-based vision models.
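The abstract introduces a Positive-to-Negative Ratio (PNR) that summarizes the balance between supportive and suppressive attributions. The paper does not spell out the formula here, so the following is a minimal sketch under the assumption that PNR is the total positive attribution mass divided by the total negative mass of a signed attribution map; the function name `pnr` and the toy maps are hypothetical.

```python
import numpy as np

def pnr(attributions, eps=1e-8):
    """Positive-to-Negative Ratio of a signed attribution map.

    Assumed formulation: sum of positive attributions divided by the
    magnitude of negative attributions; the paper's exact definition
    may differ.
    """
    a = np.asarray(attributions, dtype=float)
    pos = a[a > 0].sum()          # supportive mass
    neg = -a[a < 0].sum()         # suppressive mass (as a magnitude)
    return pos / (neg + eps)      # eps guards against division by zero

# Toy illustration: a clean input with mostly supportive evidence vs.
# a perturbed input where suppressive evidence dominates.
clean_map = np.array([[0.9, 0.2], [-0.1, 0.3]])
adv_map = np.array([[0.2, -0.6], [-0.4, 0.1]])
print(pnr(clean_map) > pnr(adv_map))  # True for these toy maps
```

Under this reading, a sharp drop in PNR relative to typical clean inputs would be the lightweight adversarial-detection signal the abstract mentions, with no retraining required.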
Source: arXiv:2603.01605