MoSA: Motion-Guided Semantic Alignment for Dynamic Scene Graph Generation
1️⃣ One-Sentence Summary
This paper proposes MoSA, a method that extracts inter-object motion features (e.g., velocity, distance), fuses them with spatial relationship features, and then aligns the result with text semantics. This significantly improves the recognition of dynamic inter-object relationships in video, with especially strong gains on rare relationship types.
Dynamic Scene Graph Generation (DSGG) aims to structurally model objects and their dynamic interactions in video sequences for high-level semantic understanding. However, existing methods struggle with fine-grained relationship modeling, underuse semantic representations, and model tail relationships poorly. To address these issues, this paper proposes MoSA, a motion-guided semantic alignment method for DSGG. First, a Motion Feature Extractor (MFE) encodes object-pair motion attributes such as distance, velocity, motion persistence, and directional consistency. These motion attributes are then fused with spatial relationship features by the Motion-guided Interaction Module (MIM) to produce motion-aware relationship representations. To further enhance semantic discrimination, a cross-modal Action Semantic Matching (ASM) mechanism aligns visual relationship features with text embeddings of the relationship categories. Finally, a category-weighted loss strategy emphasizes learning of tail relationships. Extensive experiments show that MoSA achieves the best performance on the Action Genome dataset.
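To make the MFE stage concrete, here is a minimal sketch of how the four motion attributes named in the abstract (distance, velocity, motion persistence, directional consistency) could be computed from per-frame bounding boxes of a subject-object pair. The exact formulas are illustrative assumptions, not the paper's implementation; the function and threshold names are hypothetical.

```python
import math

def center(box):
    # box = (x1, y1, x2, y2); returns the box center point
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def motion_features(subj_boxes, obj_boxes):
    """Illustrative motion attributes for one object pair over T frames.
    All formulas below are plausible stand-ins, not the paper's MFE."""
    T = len(subj_boxes)
    dists, vels, disp = [], [], []
    prev = None
    for t in range(T):
        cs, co = center(subj_boxes[t]), center(obj_boxes[t])
        rel = (cs[0] - co[0], cs[1] - co[1])        # subject relative to object
        dists.append(math.hypot(rel[0], rel[1]))    # inter-object distance
        if prev is not None:
            d = (rel[0] - prev[0], rel[1] - prev[1])
            disp.append(d)
            vels.append(math.hypot(d[0], d[1]))     # relative speed per frame
        prev = rel
    # motion persistence: fraction of frames with non-negligible relative motion
    persistence = sum(v > 1e-3 for v in vels) / max(len(vels), 1)
    # directional consistency: mean cosine between consecutive displacements
    cos = []
    for a, b in zip(disp, disp[1:]):
        na, nb = math.hypot(a[0], a[1]), math.hypot(b[0], b[1])
        if na > 1e-6 and nb > 1e-6:
            cos.append((a[0] * b[0] + a[1] * b[1]) / (na * nb))
    return {
        "distance": sum(dists) / T,
        "velocity": sum(vels) / max(len(vels), 1),
        "persistence": persistence,
        "consistency": sum(cos) / len(cos) if cos else 0.0,
    }

# Example: a subject sliding steadily toward a stationary object should give
# high persistence and directional consistency.
subj = [(t, 0, t + 2, 2) for t in range(5)]   # moves right 1 px per frame
obj = [(10, 0, 12, 2)] * 5                    # stationary
feats = motion_features(subj, obj)
```

In MoSA these scalar attributes would presumably be embedded and fused with spatial relationship features inside the MIM rather than used raw as above.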
Source: arXiv: 2604.19631