📄 Abstract - ReMoRa: Multimodal Large Language Model based on Refined Motion Representation for Long-Video Understanding
While multimodal large language models (MLLMs) have shown remarkable success across a wide range of tasks, long-form video understanding remains a significant challenge. The task is difficult because processing a full stream of RGB frames is computationally intractable and highly redundant, since self-attention has quadratic complexity in sequence length. In this paper, we propose ReMoRa, a video MLLM that operates directly on the compressed representation of a video. A sparse set of RGB keyframes is retained for appearance, while temporal dynamics are encoded as a motion representation, removing the need for dense sequential RGB frames. These motion representations act as a compact proxy for optical flow, capturing temporal dynamics without decoding every frame. To address the noise and low fidelity of block-based motion vectors, we introduce a module that denoises them and generates a fine-grained motion representation. Furthermore, our model compresses these features with a cost that scales linearly with sequence length. We demonstrate the effectiveness of ReMoRa through extensive experiments on a comprehensive suite of long-video understanding benchmarks, where it outperforms baseline methods on multiple challenging benchmarks, including LongVideoBench, NExT-QA, and MLVU.
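The abstract gives no implementation details, but the pipeline it describes can be sketched conceptually. The PyTorch sketch below is purely illustrative and built on assumptions, not the paper's actual architecture: codec-style block motion vectors (one 2-D vector per 16x16 macroblock) are denoised and upsampled by a small convolutional refinement module, and the resulting motion tokens are compressed by a fixed set of learned queries that cross-attend to the sequence, so cost grows linearly with sequence length. The names `MotionRefiner` and `LinearCompressor`, and all shapes and hyperparameters, are hypothetical.

```python
import torch
import torch.nn as nn

class MotionRefiner(nn.Module):
    """Hypothetical sketch: denoise block-based motion vectors (one 2-D
    vector per 16x16 macroblock) and upsample them into a finer motion map."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.GELU(),
            # 4x spatial upsampling: one vector per 16x16 block becomes
            # one vector per 4x4 region
            nn.ConvTranspose2d(hidden, hidden, 4, stride=4),
            nn.Conv2d(hidden, 2, 3, padding=1),
        )

    def forward(self, mv):            # mv: (B, 2, H/16, W/16)
        return self.net(mv)           # refined: (B, 2, H/4, W/4)

class LinearCompressor(nn.Module):
    """A fixed number of learned queries cross-attend to the motion tokens;
    cost is O(num_queries * seq_len), i.e. linear in sequence length."""
    def __init__(self, dim=256, num_queries=32, heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):        # tokens: (B, N, dim), N grows with video length
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        out, _ = self.attn(q, tokens, tokens)
        return out                    # (B, num_queries, dim): fixed size

# Toy usage with random "motion vectors" for a 224x224 frame.
mv = torch.randn(1, 2, 14, 14)                  # 14x14 grid of 16x16 macroblocks
refined = MotionRefiner()(mv)                   # (1, 2, 56, 56)
tokens = refined.flatten(2).transpose(1, 2)     # (1, 3136, 2)
tokens = nn.Linear(2, 256)(tokens)              # project to model dimension
compressed = LinearCompressor()(tokens)         # (1, 32, 256)
```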
ReMoRa: Multimodal Large Language Model based on Refined Motion Representation for Long-Video Understanding
1️⃣ One-Sentence Summary
This paper proposes a new model named ReMoRa, which efficiently tackles the prohibitive computational cost of long-video understanding in multimodal large language models by operating directly on the motion representations of compressed video instead of large numbers of raw frames, achieving leading results on multiple long-video understanding benchmarks.