arXiv submission date: 2026-04-20
📄 Abstract - Match-Any-Events: Zero-Shot Motion-Robust Feature Matching Across Wide Baselines for Event Cameras

Event cameras have recently shown promising capabilities in instantaneous motion estimation due to their robustness to low light and fast motions. However, computing wide-baseline correspondence between two arbitrary views remains a significant challenge, since event appearance changes substantially with motion, and learning-based approaches are constrained by both scalability and limited wide-baseline supervision. We therefore introduce the first event matching model that achieves cross-dataset wide-baseline correspondence in a zero-shot manner: a single model trained once is deployed on unseen datasets without any target-domain fine-tuning or adaptation. To enable this capability, we introduce a motion-robust and computationally efficient attention backbone that learns multi-timescale features from event streams, augmented with sparsity-aware event token selection, making large-scale training on diverse wide-baseline supervision computationally feasible. To provide the supervision needed for wide-baseline generalization, we develop a robust event motion synthesis framework to generate large-scale event-matching datasets with augmented viewpoints, modalities, and motions. Extensive experiments across multiple benchmarks show that our framework achieves a 37.7% improvement over the previous best event feature matching methods. Code and data are available at: this https URL.
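The abstract mentions "sparsity-aware event token selection" as the mechanism that makes large-scale attention training feasible. As a rough illustration only (the paper's actual design is not specified here), a minimal sketch of one plausible such scheme: bin events into a voxel grid, pool spatial patches into tokens, and keep only the densest fraction of tokens. All names and parameters below (`select_event_tokens`, `patch`, `keep_ratio`) are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical sketch of sparsity-aware event token selection:
# events are binned into a (T, H, W) voxel grid; only the densest
# spatial patches (tokens) are kept, exploiting event sparsity so
# attention runs over far fewer tokens than a dense image would need.

def select_event_tokens(voxel_grid, patch=8, keep_ratio=0.25):
    """voxel_grid: (T, H, W) array of per-bin event counts.
    Returns indices of the kept tokens and per-token activity."""
    T, H, W = voxel_grid.shape
    activity = voxel_grid.sum(axis=0)            # total events per pixel
    gh, gw = H // patch, W // patch
    tokens = activity[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
    token_activity = tokens.sum(axis=(1, 3)).ravel()   # (gh*gw,) events per patch
    k = max(1, int(keep_ratio * token_activity.size))
    keep = np.argsort(token_activity)[::-1][:k]        # densest k tokens
    return keep, token_activity

rng = np.random.default_rng(0)
grid = (rng.random((5, 64, 64)) < 0.02).astype(float)  # sparse synthetic events
idx, act = select_event_tokens(grid)
print(len(idx), "of", act.size, "tokens kept")
```

With a 64x64 grid and 8x8 patches this keeps 16 of 64 tokens, so downstream attention cost drops roughly fourfold in this toy setting.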

Top-level tags: computer vision, machine learning
Detailed tags: event cameras, feature matching, zero-shot, wide-baseline, motion robustness

Match-Any-Events: Zero-Shot Motion-Robust Feature Matching Across Wide Baselines for Event Cameras


1️⃣ One-sentence summary

This paper presents the first model to achieve zero-shot, cross-dataset wide-baseline event matching: by designing a motion-robust, sparsity-aware attention network and synthesizing a large-scale multi-viewpoint event dataset, it improves matching accuracy by 37.7% over existing methods on unseen scenes.

Source: arXiv: 2604.18744