arXiv submission date: 2026-01-12
📄 Abstract - VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding

This paper presents VideoLoom, a unified Video Large Language Model (Video LLM) for joint spatial-temporal understanding. To facilitate the development of fine-grained spatial and temporal localization capabilities, we curate LoomData-8.7k, a human-centric video dataset with temporally grounded and spatially localized captions. With this, VideoLoom achieves state-of-the-art or highly competitive performance across a variety of spatial and temporal benchmarks (e.g., 63.1 J&F on ReVOS for referring video object segmentation, and 48.3 R1@0.7 on Charades-STA for temporal grounding). In addition, we introduce LoomBench, a novel benchmark consisting of temporal, spatial, and compositional video-question pairs, enabling a comprehensive evaluation of Video LLMs from diverse aspects. Collectively, these contributions offer a universal and effective suite for joint spatial-temporal video understanding, setting a new standard in multimodal intelligence.
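The abstract reports results with two standard localization metrics: J&F (region similarity plus contour accuracy) for referring video object segmentation on ReVOS, and R1@0.7 for temporal grounding on Charades-STA. As a quick illustration of the latter, the sketch below computes Recall@1 at an IoU threshold of 0.7 over a pair of toy predictions; the interval values and helper names are hypothetical and not taken from the paper.

```python
from typing import List, Tuple

def temporal_iou(pred: Tuple[float, float], gt: Tuple[float, float]) -> float:
    """Intersection-over-union of two [start, end] time intervals (seconds)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_1(preds: List[Tuple[float, float]],
                gts: List[Tuple[float, float]],
                iou_threshold: float = 0.7) -> float:
    """R1@threshold: fraction of queries whose top-1 predicted segment
    overlaps the ground-truth segment with IoU >= threshold."""
    hits = sum(temporal_iou(p, g) >= iou_threshold for p, g in zip(preds, gts))
    return hits / len(gts)

# Toy example with made-up model outputs (not results from the paper):
preds = [(2.0, 8.5), (10.0, 15.0)]
gts   = [(2.5, 9.0), (30.0, 40.0)]
print(recall_at_1(preds, gts, 0.7))  # 0.5 -- only the first query exceeds IoU 0.7
```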

Top tags: multi-modal video model evaluation
Detailed tags: video llm spatial-temporal understanding benchmark video dataset multimodal intelligence

VideoLoom: A Video Large Language Model for Joint Spatial-Temporal Understanding


1️⃣ One-Sentence Summary

This paper presents VideoLoom, a video large language model that, by building a finely annotated dataset and a new evaluation benchmark, can jointly understand where objects are located in space and how actions unfold over time in a video, achieving leading or highly competitive performance on multiple video understanding tasks.

Source: arXiv 2601.07290