arXiv submission date: 2026-03-03
📄 Abstract - Interpretable Motion-Attentive Maps: Spatio-Temporally Localizing Concepts in Video Diffusion Transformers

Video Diffusion Transformers (DiTs) synthesize high-fidelity video from text descriptions involving motion. However, our understanding of how Video DiTs turn motion words into video remains limited. Moreover, prior work on interpretable saliency maps primarily targets objects, leaving motion-related behavior in Video DiTs largely unexplored. In this paper, we investigate concrete motion features that specify when and which object moves for a given motion concept. First, for spatial localization, we introduce GramCol, which adaptively produces per-frame saliency maps for any text concept, motion or non-motion. Second, we propose a motion-feature selection algorithm that yields an Interpretable Motion-Attentive Map (IMAP), localizing motion both spatially and temporally. Our method discovers concept saliency maps without any gradient computation or parameter updates. Experimentally, it shows outstanding localization capability on motion localization and zero-shot video semantic segmentation, producing clearer, more interpretable saliency maps for both motion and non-motion concepts.
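The abstract does not spell out how GramCol builds its per-frame maps, only that the process is gradient-free. A common gradient-free readout in diffusion models is to pool the text-to-video cross-attention over the tokens of the concept of interest; the sketch below illustrates that generic idea under assumed tensor shapes. The function name, arguments, and attention layout are all hypothetical, and this is not the paper's actual GramCol algorithm.

```python
import torch

def per_frame_saliency(attn, concept_token_ids, T, H, W):
    """Derive per-frame saliency maps for a text concept from captured
    cross-attention weights, with no gradients or parameter updates.

    attn: (heads, T*H*W, num_text_tokens) cross-attention weights from one
          DiT block at one denoising step; video tokens form a flattened
          T x H x W latent grid (an assumed layout).
    concept_token_ids: indices of the text tokens spelling the concept.
    Returns: (T, H, W) saliency maps, min-max normalized per frame.
    """
    # Average over heads, then sum attention mass over the concept's tokens.
    sal = attn.mean(dim=0)[:, concept_token_ids].sum(dim=-1)  # (T*H*W,)
    flat = sal.reshape(T, -1)
    # Per-frame min-max normalization keeps each frame's map comparable.
    lo = flat.min(dim=1, keepdim=True).values
    hi = flat.max(dim=1, keepdim=True).values
    flat = (flat - lo) / (hi - lo + 1e-8)
    return flat.reshape(T, H, W)
```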

Top tags: computer vision, multi-modal, model evaluation
Detailed tags: video diffusion transformers, interpretability, saliency maps, motion localization, video understanding

Interpretable Motion-Attentive Maps: Spatio-Temporally Localizing Concepts in Video Diffusion Transformers


1️⃣ One-Sentence Summary

This paper proposes a gradient-free method that automatically generates spatio-temporal localization maps for both moving objects and static concepts in video, clearly explaining how a video generation model turns a text description into concrete dynamic imagery.
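As an illustration of the "when" half of this summary, one simple way to turn per-frame saliency maps into a temporal localization is to score how much the concept's map shifts between consecutive frames. The abstract does not describe its motion-feature selection algorithm at this level of detail, so the sketch below (reusing the hypothetical `per_frame_saliency` output above) is only an assumed approximation, not the paper's IMAP procedure.

```python
import torch

def temporal_motion_scores(saliency, thresh=0.5):
    """Score each frame transition by how much the concept's saliency map
    moves between consecutive frames (a hypothetical temporal localizer).

    saliency: (T, H, W) per-frame saliency maps for one concept.
    Returns: (T-1,) normalized motion scores and a boolean mask of
             transitions whose score exceeds `thresh`.
    """
    # Mean absolute change of the map between adjacent frames.
    diff = (saliency[1:] - saliency[:-1]).abs().mean(dim=(1, 2))  # (T-1,)
    scores = diff / (diff.max() + 1e-8)
    return scores, scores > thresh
```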

Source: arXiv 2603.02919