arXiv submission date: 2026-02-03
📄 Abstract - EventFlash: Towards Efficient MLLMs for Event-Based Vision

Event-based multimodal large language models (MLLMs) enable robust perception in high-speed and low-light scenarios, addressing key limitations of frame-based MLLMs. However, current event-based MLLMs often rely on dense image-like processing paradigms, overlooking the spatiotemporal sparsity of event streams and resulting in high computational cost. In this paper, we propose EventFlash, a novel and efficient MLLM to explore spatiotemporal token sparsification for reducing data redundancy and accelerating inference. Technically, we build EventMind, a large-scale and scene-diverse dataset with over 500k instruction sets, providing both short and long event stream sequences to support our curriculum training strategy. We then present an adaptive temporal window aggregation module for efficient temporal sampling, which adaptively compresses temporal tokens while retaining key temporal cues. Finally, a sparse density-guided attention module is designed to improve spatial token efficiency by selecting informative regions and suppressing empty or sparse areas. Experimental results show that EventFlash achieves a $12.4\times$ throughput improvement over the baseline (EventFlash-Zero) while maintaining comparable performance. It supports long-range event stream processing with up to 1,000 bins, significantly outperforming the 5-bin limit of EventGPT. We believe EventFlash serves as an efficient foundation model for event-based vision.
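The paper's code is not reproduced here, but the two efficiency ideas the abstract names (compressing temporal tokens into adaptive windows, and keeping only event-dense spatial regions) can be sketched in a few lines. Below is a minimal PyTorch illustration, not EventFlash's actual implementation: the function names, tensor shapes, pooling choice, and `keep_ratio` are all assumptions, and the paper's modules are presumably learned, whereas this sketch uses fixed average pooling and a hard top-k.

```python
import torch
import torch.nn.functional as F

def temporal_window_aggregate(event_tokens: torch.Tensor, target_windows: int) -> torch.Tensor:
    """Compress T temporal bins into a fixed number of windows (assumed shapes).

    event_tokens: (T, N, D) — T temporal bins, N spatial tokens, D channels.
    Returns: (target_windows, N, D).
    """
    x = event_tokens.permute(1, 2, 0)              # (N, D, T)
    x = F.adaptive_avg_pool1d(x, target_windows)   # pool the temporal axis
    return x.permute(2, 0, 1)                      # (target_windows, N, D)

def density_guided_select(tokens: torch.Tensor, event_counts: torch.Tensor,
                          keep_ratio: float = 0.25):
    """Keep only the most event-dense spatial tokens (hypothetical stand-in
    for the paper's sparse density-guided attention).

    tokens: (N, D) spatial tokens; event_counts: (N,) events per patch.
    """
    k = max(1, int(keep_ratio * tokens.shape[0]))
    idx = torch.topk(event_counts, k).indices      # densest patches carry the signal
    return tokens[idx], idx

# Toy usage: 1,000 temporal bins, 196 spatial patches, 64-dim tokens.
tokens = torch.randn(1000, 196, 64)
pooled = temporal_window_aggregate(tokens, target_windows=32)  # (32, 196, 64)
counts = torch.randint(0, 50, (196,)).float()
kept, idx = density_guided_select(pooled[0], counts)           # (49, 64)
```

In a real system the selected indices would more likely drive a sparse attention mask than an outright token drop, so suppressed regions can still be attended to when needed; the hard top-k above is only the simplest way to make the idea concrete.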

Top-level tags: multi-modal, computer vision, model training
Detailed tags: event-based vision, multimodal llm, spatiotemporal sparsity, efficient inference, instruction tuning

EventFlash: Towards Efficient MLLMs for Event-Based Vision


1️⃣ One-sentence summary

This paper proposes EventFlash, an efficient new model that compresses redundant spatiotemporal information in event-stream data, greatly increasing the inference speed of event-based vision MLLMs while preserving strong perception, and making them better suited to long-sequence tasks in high-speed, low-light scenarios.

Source: arXiv 2602.03230