WildRayZer: Self-supervised Large View Synthesis in Dynamic Environments
1️⃣ One-sentence summary
This paper proposes WildRayZer, a self-supervised framework that learns from dynamic videos to separate and reconstruct the static background and moving objects in a scene. Even when both the camera and objects are in motion, it produces high-quality, ghosting-free novel-view images in a single feed-forward pass.
We present WildRayZer, a self-supervised framework for novel view synthesis (NVS) in dynamic environments where both the camera and objects move. Dynamic content breaks the multi-view consistency that static NVS models rely on, leading to ghosting, hallucinated geometry, and unstable pose estimation. WildRayZer addresses this by performing an analysis-by-synthesis test: a camera-only static renderer explains rigid structure, and its residuals reveal transient regions. From these residuals, we construct pseudo motion masks, distill a motion estimator, and use it to mask input tokens and gate loss gradients so supervision focuses on cross-view background completion. To enable large-scale training and evaluation, we curate Dynamic RealEstate10K (D-RE10K), a real-world dataset of 15K casually captured dynamic sequences, and D-RE10K-iPhone, a paired transient and clean benchmark for sparse-view transient-aware NVS. Experiments show that WildRayZer consistently outperforms optimization-based and feed-forward baselines in both transient-region removal and full-frame NVS quality with a single feed-forward pass.
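The abstract's core mechanism — rendering the scene with a camera-only static model and treating its unexplained residuals as evidence of transient content — can be illustrated with a minimal sketch. The function name `pseudo_motion_mask`, the per-pixel photometric residual, and the threshold `tau` are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def pseudo_motion_mask(rendered, observed, tau=0.1):
    """Illustrative sketch: flag pixels the static renderer cannot explain.

    rendered, observed: float arrays of shape (H, W, 3) in [0, 1].
    tau: residual threshold (assumed hyperparameter, not from the paper).
    Returns a boolean (H, W) mask; True marks likely transient regions.
    """
    # Per-pixel photometric residual, averaged over color channels.
    residual = np.abs(rendered - observed).mean(axis=-1)
    # Pixels with large residuals are assumed to be moving content
    # that the camera-only static model failed to reconstruct.
    return residual > tau

# Toy example: a 4x4 static scene with a 2x2 "moving object"
# present in the observation but absent from the static rendering.
ren = np.zeros((4, 4, 3))
obs = np.zeros((4, 4, 3))
obs[1:3, 1:3] = 0.8
mask = pseudo_motion_mask(ren, obs)
print(mask.sum())  # → 4 transient pixels
```

In the paper such masks then serve two roles: masking input tokens so transient pixels never enter the model, and gating loss gradients so supervision focuses on cross-view background completion rather than on unreconstructable moving content.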
Source: arXiv:2601.10716