arXiv submission date: 2025-12-11
📄 Abstract - MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation

This paper proposes a large-scale multi-modal dataset for referring motion expression video segmentation, focusing on segmenting and tracking target objects in videos based on language descriptions of the objects' motions. Existing referring video segmentation datasets often focus on salient objects and use language expressions rich in static attributes, potentially allowing the target object to be identified in a single frame. Such datasets underemphasize the role of motion in both video and language. To explore the feasibility of using motion expressions and motion reasoning cues for pixel-level video understanding, we introduce MeViS, a dataset containing 33,072 human-annotated motion expressions in both text and audio, covering 8,171 objects in 2,006 videos of complex scenarios. We benchmark 15 existing methods across 4 tasks supported by MeViS, including 6 referring video object segmentation (RVOS) methods, 3 audio-guided video object segmentation (AVOS) methods, 2 referring multi-object tracking (RMOT) methods, and 4 video captioning methods for the newly introduced referring motion expression generation (RMEG) task. The results demonstrate the weaknesses and limitations of existing methods in addressing motion expression-guided video understanding. We further analyze the challenges and propose an approach, LMPM++, for RVOS/AVOS/RMOT that achieves new state-of-the-art results. Our dataset provides a platform that facilitates the development of motion expression-guided video understanding algorithms in complex video scenes. The proposed MeViS dataset and the method's source code are publicly available at this https URL
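To make the dataset's structure concrete, below is a minimal Python sketch of what one MeViS-style annotation record might look like, reflecting the abstract's description of text/audio motion expressions that can refer to one or more objects across video frames. The field names (`video_id`, `expression`, `audio_path`, `object_ids`, `frame_masks`) and layout are assumptions for illustration only, not the actual MeViS release format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical sketch of a single MeViS-style annotation record.
# All field names and structure are assumptions for illustration;
# the actual MeViS release format may differ.

@dataclass
class MotionExpressionAnnotation:
    video_id: str                 # which of the ~2,006 videos this record belongs to
    expression: str               # natural-language motion expression (text form)
    audio_path: str               # path to the spoken (audio) version of the expression
    object_ids: List[int] = field(default_factory=list)   # target object(s); may be more than one
    frame_masks: Dict[int, str] = field(default_factory=dict)  # frame index -> mask file path

# Example: a motion expression identifies its target by behavior over time,
# so the ground truth is per-object, per-frame segmentation masks rather than
# a single box in a single frame.
ann = MotionExpressionAnnotation(
    video_id="video_0001",
    expression="the bird that flies away from the flock",
    audio_path="audio/video_0001/expr_03.wav",
    object_ids=[2],
    frame_masks={0: "masks/video_0001/obj_2/00000.png"},
)
print(ann.expression, "->", ann.object_ids)
```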

Top-level tags: multi-modal, computer vision, natural language processing
Detailed tags: video segmentation, referring expression, motion understanding, benchmark dataset, multi-object tracking

MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation


1️⃣ One-Sentence Summary

This paper introduces MeViS, a large-scale multi-modal dataset dedicated to segmenting and tracking target objects in videos based on language descriptions of their motions. It fills the gap left by existing datasets' insufficient attention to motion information, demonstrates the limitations of current methods through experiments, and provides a platform for advancing motion expression-guided video understanding algorithms.


Source: arXiv: 2512.10945