arXiv submission date: 2026-03-22
📄 Abstract - LiFR-Seg: Anytime High-Frame-Rate Segmentation via Event-Guided Propagation

Dense semantic segmentation in dynamic environments is fundamentally limited by the low-frame-rate (LFR) nature of standard cameras, which creates critical perceptual gaps between frames. To solve this, we introduce Anytime Interframe Semantic Segmentation: a new task for predicting segmentation at any arbitrary time using only a single past RGB frame and a stream of asynchronous event data. This task presents a core challenge: how to robustly propagate dense semantic features using a motion field derived from sparse and often noisy event data, all while mitigating feature degradation in highly dynamic scenes. We propose LiFR-Seg, a novel framework that directly addresses these challenges by propagating deep semantic features through time. The core of our method is an uncertainty-aware warping process, guided by an event-driven motion field and its learned, explicit confidence. A temporal memory attention module further ensures coherence in dynamic scenarios. We validate our method on the DSEC dataset and a new high-frequency synthetic benchmark (SHF-DSEC) we contribute. Remarkably, our LFR system achieves performance (73.82% mIoU on DSEC) that is statistically indistinguishable from an HFR upper-bound (within 0.09%) that has full access to the target frame. This work presents a new, efficient paradigm for achieving robust, high-frame-rate perception with low-frame-rate hardware. Project Page: this https URL Code: this https URL.
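The core idea in the abstract, warping dense semantic features with an event-derived motion field and blending by a learned confidence, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, nearest-neighbour sampling, and the confidence-weighted fallback to the un-warped feature are all illustrative assumptions.

```python
import numpy as np

def warp_features_with_confidence(feat, flow, conf):
    """Hypothetical sketch of uncertainty-aware feature warping.

    feat: (C, H, W) dense semantic features from the past RGB frame
    flow: (2, H, W) event-derived motion field (dx, dy), backward mapping
    conf: (H, W) learned confidence in [0, 1] for the motion field
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Backward warp: look up each target pixel at its source location,
    # clamped to the image; nearest-neighbour for brevity (a real system
    # would use bilinear sampling).
    src_x = np.clip(np.round(xs - flow[0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys - flow[1]).astype(int), 0, H - 1)
    warped = feat[:, src_y, src_x]
    # Where the event-driven flow is unreliable (low confidence), fall
    # back to the un-warped feature instead of trusting the motion.
    return conf * warped + (1.0 - conf) * feat
```

The confidence map acts as a per-pixel gate: sparse or noisy event regions contribute less warped signal, which is one plausible way to mitigate the feature degradation the abstract mentions.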

Top-level tags: computer vision, systems, model evaluation
Detailed tags: semantic segmentation, event cameras, temporal propagation, high-frame-rate, uncertainty-aware warping

LiFR-Seg: Anytime High-Frame-Rate Segmentation via Event-Guided Propagation


1️⃣ One-sentence summary

This paper proposes a new method called LiFR-Seg that predicts dense semantic segmentation at any arbitrary time using only a single past RGB frame and an asynchronous event stream, achieving perception performance with low-frame-rate hardware that rivals high-frame-rate systems.

Source: arXiv 2603.21115