STRIDE: When to Speak Meets Sequence Denoising for Streaming Video Understanding
1️⃣ One-Sentence Summary
This paper proposes a new method called STRIDE, which uses a lightweight sequence-denoising module to let an AI system watching live streaming video more accurately and coherently decide the best moment to speak up or react proactively.
Recent progress in video large language models (Video-LLMs) has enabled strong offline reasoning over long and complex videos. However, real-world deployments increasingly require streaming perception and proactive interaction, where video frames arrive online and the system must decide not only what to say, but also when to say it. In this work, we revisit proactive activation in streaming video as a structured sequence modeling problem, motivated by the observation that temporal transitions in streaming video naturally form span-structured activation patterns. To capture this span-level structure, we model activation signals jointly over a sliding temporal window and update them iteratively as new frames arrive. We propose STRIDE (Structured Temporal Refinement with Iterative DEnoising), which employs a lightweight masked diffusion module at the activation interface to jointly predict and progressively refine activation signals across the window. Extensive experiments on diverse streaming benchmarks and downstream models demonstrate that STRIDE produces more reliable and temporally coherent proactive responses, significantly improving when-to-speak decision quality in online streaming scenarios.
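The abstract describes iterative masked denoising of activation ("speak / stay silent") signals over a sliding window of frames. The minimal sketch below illustrates that idea only; it is not the paper's implementation, and the window size, step count, and the `toy_predictor` function are all hypothetical stand-ins for STRIDE's learned masked-diffusion module.

```python
import numpy as np

WINDOW = 8   # hypothetical sliding window of recent frames
STEPS = 4    # hypothetical number of iterative denoising steps
MASK = -1    # sentinel meaning "activation not yet decided"

def toy_predictor(features, signals):
    """Stand-in for a learned module: returns a per-frame probability
    that the model should speak. Here it is just a sigmoid of a scalar
    frame feature, ignoring already-committed signals."""
    return 1.0 / (1.0 + np.exp(-features))

def denoise_window(features):
    """Iteratively refine activation signals across the window:
    each step commits the most confident still-masked positions,
    mimicking progressive unmasking in masked diffusion."""
    signals = np.full(WINDOW, MASK, dtype=int)
    per_step = max(1, WINDOW // STEPS)
    for _ in range(STEPS):
        probs = toy_predictor(features, signals)
        conf = np.abs(probs - 0.5)       # confidence = distance from 0.5
        conf[signals != MASK] = -1.0     # skip already-committed frames
        for idx in np.argsort(conf)[::-1][:per_step]:
            if signals[idx] == MASK:
                signals[idx] = int(probs[idx] > 0.5)
    return signals

# Simulated scalar features for one window; positive values stand in
# for "an event worth reacting to" in that frame.
feats = np.random.default_rng(0).normal(size=WINDOW)
print(denoise_window(feats))  # per-frame 0/1 speak decisions
```

In a streaming setting, this refinement would be rerun as the window slides forward with each arriving frame, so earlier tentative decisions can be revised before they are acted on.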
Source: arXiv: 2603.27593