EventHub: Data Factory for Generalizable Event-Based Stereo Networks without Active Sensors
1️⃣ One-Sentence Summary
This paper introduces EventHub, a new framework that generates training data from ordinary color images alone, making it possible to train event-based stereo depth networks that need no costly sensor annotations and generalize remarkably well.
We propose EventHub, a novel framework for training deep-event stereo networks without ground truth annotations from costly active sensors, relying instead on standard color images. From these images, we derive either proxy annotations and proxy events through state-of-the-art novel view synthesis techniques, or simply proxy annotations when images are already paired with event data. Using the training set generated by our data factory, we repurpose state-of-the-art stereo models from RGB literature to process event data, obtaining new event stereo models with unprecedented generalization capabilities. Experiments on widely used event stereo datasets support the effectiveness of EventHub and show how the same data distillation mechanism can improve the accuracy of RGB stereo foundation models in challenging conditions such as nighttime scenes.
Source: arXiv: 2604.02331