EventVGGT: Exploring Cross-Modal Distillation for Consistent Event-based Depth Estimation
1️⃣ One-sentence summary
This paper proposes a new method called EventVGGT, which treats event data as a continuous video sequence and, for the first time, distills spatio-temporal and geometric priors from a vision foundation model, effectively addressing the inconsistent and inaccurate results produced by existing event-camera depth estimation methods that ignore temporal continuity.
Event cameras offer superior sensitivity to high-speed motion and extreme lighting, making event-based monocular depth estimation a promising approach for robust 3D perception in challenging conditions. However, progress is severely hindered by the scarcity of dense depth annotations. While recent annotation-free approaches mitigate this by distilling knowledge from Vision Foundation Models (VFMs), a critical limitation persists: they process event streams as independent frames. By neglecting the inherent temporal continuity of event data, these methods fail to leverage the rich temporal priors encoded in VFMs, ultimately yielding temporally inconsistent and less accurate depth predictions. To address this, we introduce EventVGGT, a novel framework that explicitly models the event stream as a coherent video sequence. To the best of our knowledge, we are the first to distill spatio-temporal and multi-view geometric priors from the Visual Geometry Grounded Transformer (VGGT) into the event domain. We achieve this via a comprehensive tri-level distillation strategy: (i) Cross-Modal Feature Mixture (CMFM) bridges the modality gap at the output level by fusing RGB and event features to generate auxiliary depth predictions; (ii) Spatio-Temporal Feature Distillation (STFD) distills VGGT's powerful spatio-temporal representations at the feature level; and (iii) Temporal Consistency Distillation (TCD) enforces cross-frame coherence at the temporal level by aligning inter-frame depth changes. Extensive experiments demonstrate that EventVGGT consistently outperforms existing methods, reducing the absolute mean depth error at 30m by over 53% on EventScape (from 2.30 to 1.06), while exhibiting robust zero-shot generalization on the unseen DENSE and MVSEC datasets.
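The abstract names three distillation terms operating at the output, feature, and temporal levels. Below is a minimal numpy sketch of how such terms might be combined into one training loss; the function names, L1/L2 choices, tensor shapes, and weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def stfd_loss(student_feat, teacher_feat):
    """Feature-level term (STFD-style): MSE between the event branch's
    features and the frozen VGGT teacher's spatio-temporal features."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

def tcd_loss(student_depth, teacher_depth):
    """Temporal-level term (TCD-style): align inter-frame depth changes.
    Depth maps are stacked as (T, H, W); frame-to-frame deltas of the
    student are matched to those of the teacher with an L1 penalty."""
    ds = np.diff(student_depth, axis=0)  # (T-1, H, W) student deltas
    dt = np.diff(teacher_depth, axis=0)  # (T-1, H, W) teacher deltas
    return float(np.mean(np.abs(ds - dt)))

def total_loss(student_feat, teacher_feat,
               student_depth, teacher_depth,
               aux_depth, weights=(1.0, 1.0, 1.0)):
    """Combine the three terms. `aux_depth` stands in for the auxiliary
    prediction from fused RGB+event features (CMFM-style output-level
    supervision); the equal weights are an assumption."""
    cmfm = float(np.mean(np.abs(student_depth - aux_depth)))
    return (weights[0] * cmfm
            + weights[1] * stfd_loss(student_feat, teacher_feat)
            + weights[2] * tcd_loss(student_depth, teacher_depth))
```

The key point the sketch makes concrete is that TCD penalizes mismatched *changes* between consecutive frames rather than per-frame depth values, which is what enforces temporal coherence on top of the per-frame feature and output supervision.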
From arXiv: 2603.09385