Efficient Event Camera Volume System
1️⃣ One-Sentence Summary
This paper proposes a novel adaptive compression framework called EECVS, which models event streams as continuous-time signals and intelligently selects among transform methods. It addresses the difficulty of integrating sparse event-camera data into standard robotic pipelines, significantly improving cross-scene generalization and downstream task performance (e.g., segmentation) while maintaining high computational efficiency.
Event cameras promise low latency and high dynamic range, yet their sparse output challenges integration into standard robotic pipelines. We introduce EECVS (Efficient Event Camera Volume System), a novel framework that models event streams as continuous-time Dirac impulse trains, enabling artifact-free compression through direct transform evaluation at event timestamps. Our key innovation combines density-driven adaptive selection among DCT, DTFT, and DWT transforms with transform-specific coefficient pruning strategies tailored to each domain's sparsity characteristics. The framework eliminates temporal binning artifacts while automatically adapting compression strategies based on real-time event density analysis. On EHPT-XC and MVSEC datasets, our framework achieves superior reconstruction fidelity, with DTFT delivering the lowest earth mover's distance. In downstream segmentation tasks, EECVS demonstrates robust generalization. Notably, our approach demonstrates exceptional cross-dataset generalization: when evaluated with EventSAM segmentation, EECVS achieves mean IoU 0.87 on MVSEC versus 0.44 for voxel grids at 24 channels, while remaining competitive on EHPT-XC. Our ROS2 implementation provides real-time deployment, with DCT processing achieving 1.5 ms latency and 2.7× higher throughput than alternative transforms, establishing the first adaptive event compression framework that maintains both computational efficiency and superior generalization across diverse robotic scenarios.
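The core idea of the abstract can be sketched in code: modeling a per-pixel event stream as a Dirac impulse train turns a continuous-time transform integral into a finite sum over event timestamps, so no temporal binning is needed. The sketch below is illustrative only, not the paper's implementation; the cosine basis, the `select_transform` helper, and its density thresholds are all assumptions chosen to make the idea concrete.

```python
import numpy as np

def cosine_coeffs_from_events(timestamps, polarities, T, K):
    """Evaluate K cosine-transform coefficients of a Dirac impulse train.

    With s(t) = sum_i p_i * delta(t - t_i) on a window [0, T), the
    transform integral collapses to a sum over event timestamps:
    c_k = sum_i p_i * cos(pi * k * t_i / T). Evaluating the basis
    directly at event times avoids temporal binning artifacts.
    """
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(polarities, dtype=float)
    k = np.arange(K)[:, None]                      # (K, 1) frequency indices
    basis = np.cos(np.pi * k * t[None, :] / T)     # basis sampled at event times
    return basis @ p                               # (K,) coefficient vector

def select_transform(events_per_second):
    """Toy density-driven transform selection (thresholds are hypothetical)."""
    if events_per_second < 1e4:
        return "DWT"    # sparse streams: wavelets localize isolated events
    elif events_per_second < 1e6:
        return "DCT"    # moderate density: cheap, well-compacted
    return "DTFT"       # dense streams: finer frequency resolution
```

Note that for `k = 0` the basis is identically 1, so the first coefficient reduces to the net polarity count of the window, which gives a quick sanity check on the evaluation.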
From arXiv: 2603.14738