Generative Anonymization in Event Streams
1️⃣ One-Sentence Summary
This paper proposes a new method that protects the identity privacy of faces captured by neuromorphic vision sensors while preserving the usability of the video data, resolving the core tension between privacy protection and data utility.
Neuromorphic vision sensors offer low latency and high dynamic range, but their deployment in public spaces raises severe data protection concerns. Recent Event-to-Video (E2V) models can reconstruct high-fidelity intensity images from sparse event streams, inadvertently exposing human identities. Current obfuscation methods, such as masking or scrambling, corrupt the spatio-temporal structure, severely degrading data utility for downstream perception tasks. To the best of our knowledge, this paper presents the first generative anonymization framework for event streams, resolving this utility-privacy trade-off. By bridging the modality gap between asynchronous events and standard spatial generative models, the pipeline projects events into an intermediate intensity representation, leverages pretrained models to synthesize realistic, non-existent identities, and re-encodes the features back into the neuromorphic domain. Experiments demonstrate that the method reliably prevents identity recovery from E2V reconstructions while preserving the structural data integrity required for downstream vision tasks. Finally, to facilitate rigorous evaluation, the authors introduce a novel, synchronized real-world event and RGB dataset captured via precise robotic trajectories, providing a robust benchmark for future research in privacy-preserving neuromorphic vision.
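The three-stage pipeline described above (events → intensity representation → generative anonymization → re-encoded events) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: `anonymize_frame` is a hypothetical stand-in (a simple blur here) for the pretrained generative model, and the re-encoding uses the standard event-camera model in which an event fires wherever the log-intensity change exceeds a contrast threshold `C` (an assumed value).

```python
import numpy as np

def events_to_frame(events, shape):
    """Project asynchronous events into a crude intensity-like frame.
    `events` is an iterable of (x, y, t, p) tuples with polarity p in {-1, +1}."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, _, p in events:
        frame[y, x] += p
    return np.clip(frame, 0, None)  # keep a nonnegative intensity proxy

def anonymize_frame(frame):
    """Hypothetical stand-in for the pretrained generative model: a 3x3 box
    blur. The real pipeline would synthesize a realistic, non-existent face."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    h, w = frame.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def frame_to_events(prev, curr, t, C=0.1):
    """Re-encode a pair of intensity frames into events using the standard
    model: an event fires where |log I_curr - log I_prev| >= C."""
    d = np.log1p(curr) - np.log1p(prev)
    ys, xs = np.nonzero(np.abs(d) >= C)
    return [(int(x), int(y), t, 1 if d[y, x] > 0 else -1) for y, x in zip(ys, xs)]

# Toy usage: accumulate a few events, anonymize, re-encode to an event stream.
raw = [(2, 2, 0.0, +1), (2, 2, 0.1, +1), (3, 3, 0.2, -1)]
frame = events_to_frame(raw, (6, 6))
anon = anonymize_frame(frame)
anon_events = frame_to_events(np.zeros_like(anon), anon, t=0.3)
print(len(anon_events))  # events produced by the anonymized frame
```

The key design point the sketch mirrors is that both conversions preserve the event-stream format end to end, so downstream neuromorphic pipelines can consume the anonymized output unchanged.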
Source: arXiv:2604.12803