Scaling the Long Video Understanding of Multimodal Large Language Models via Visual Memory Mechanism
1️⃣ One-sentence summary
This paper proposes FlexMem, a novel training-free method that mimics the human visual memory mechanism of continually recalling relevant fragments while watching a video, enabling multimodal large language models to understand ultra-long or even infinitely long videos and to efficiently process more than 1,000 frames on a single consumer-grade GPU.
Long video understanding is a key challenge that plagues the advancement of \emph{Multimodal Large Language Models} (MLLMs). In this paper, we study this problem from the perspective of the visual memory mechanism, and propose a novel and training-free approach, termed \emph{Flexible Memory} (\textbf{FlexMem}). In principle, FlexMem aims to mimic human behavior in video watching, \emph{i.e.}, continually watching video content and recalling the most relevant memory fragments to answer the question. In this way, FlexMem can help MLLMs achieve video understanding of infinite length, unlike previous methods that process all video information at once and thus have an input upper limit. Concretely, FlexMem first considers the visual KV caches as the memory sources, and realizes effective memory transfer and writing via a dual-pathway compression design. Afterwards, FlexMem also explores different memory reading strategies for diverse video understanding tasks, including the popular streaming setting. To validate FlexMem, we apply it to two popular video-MLLMs, and conduct extensive experiments on five long video tasks and one streaming video task. The experimental results show that on \textbf{a single 3090 GPU}, our FlexMem achieves clear improvements over existing efficient video understanding methods and processes more than \textbf{1k frames}, which also helps the base MLLMs achieve comparable or even better performance than SOTA MLLMs on some benchmarks, \emph{e.g.}, GPT-4o and Gemini-1.5 Pro.
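The watch-write-recall loop the abstract describes can be sketched in miniature: each video chunk is "watched" once, a compressed summary of its visual KV cache is written to memory, and only the top-k most relevant fragments are recalled at question time. Everything below (the `VisualMemory` class, mean-pooling as the compression step, cosine-similarity recall) is an illustrative assumption, not the paper's actual dual-pathway design.

```python
import numpy as np

class VisualMemory:
    """Hypothetical sketch of a FlexMem-style visual memory (not the paper's design)."""

    def __init__(self):
        self.keys = []       # one compressed key vector per stored fragment
        self.fragments = []  # the visual KV-cache payloads themselves

    def write(self, kv_cache: np.ndarray) -> None:
        # "Compress" a chunk's visual KV cache; mean pooling is a stand-in
        # for the paper's dual-pathway compression.
        self.keys.append(kv_cache.mean(axis=0))
        self.fragments.append(kv_cache)

    def read(self, query: np.ndarray, top_k: int = 2) -> list:
        # Recall the top-k fragments most similar to the question embedding.
        keys = np.stack(self.keys)
        sims = keys @ query / (
            np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8
        )
        order = np.argsort(sims)[::-1][:top_k]
        return [self.fragments[i] for i in order]

# Streaming usage: memory grows with the video, but each question only
# touches the top-k recalled fragments, so per-question cost no longer
# scales with total video length.
rng = np.random.default_rng(0)
memory = VisualMemory()
for _ in range(100):                          # 100 chunks, watched sequentially
    memory.write(rng.normal(size=(16, 64)))   # 16 visual tokens x 64-dim states
recalled = memory.read(rng.normal(size=64), top_k=2)
print(len(recalled))  # 2
```

This separation of a cheap write pass from a selective read pass is what lets the context fed to the MLLM stay bounded even as the video becomes arbitrarily long.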
Source: arXiv: 2603.29252