StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
1️⃣ One-Sentence Summary
This paper proposes a new method called StereoWorld that automatically converts ordinary single-view videos into high-quality videos with a convincing stereoscopic effect, introducing geometry-aware constraints to ensure the 3D structural accuracy and visual realism of the generated videos.
The growing adoption of XR devices has fueled strong demand for high-quality stereo video, yet its production remains costly and artifact-prone. To address this challenge, we present StereoWorld, an end-to-end framework that repurposes a pretrained video generator for high-fidelity monocular-to-stereo video generation. Our framework jointly conditions the model on the monocular video input while explicitly supervising the generation with a geometry-aware regularization to ensure 3D structural fidelity. A spatio-temporal tiling scheme is further integrated to enable efficient, high-resolution synthesis. To enable large-scale training and evaluation, we curate a high-definition stereo video dataset containing over 11M frames aligned to natural human interpupillary distance (IPD). Extensive experiments demonstrate that StereoWorld substantially outperforms prior methods, generating stereo videos with superior visual fidelity and geometric consistency. The project webpage is available at this https URL.
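The abstract names a geometry-aware regularization but does not define it. As a rough, non-authoritative illustration, the sketch below (PyTorch) shows one plausible form such a term could take: per-pixel depth and an IPD baseline give a horizontal disparity, the left frame is warped toward the right viewpoint, and the discrepancy with the generated right view is penalized. Everything here, including `geometry_reg_loss`, `focal_px`, `baseline_m`, and the L1 form of the penalty, is an assumption for illustration, not the paper's actual loss.

```python
# Minimal sketch (assumptions throughout): one plausible geometry-aware
# regularization for monocular-to-stereo generation. It warps the left
# (input) frame toward the right viewpoint using depth-derived disparity
# and penalizes the mismatch with the generated right frame.
import torch
import torch.nn.functional as F


def geometry_reg_loss(left, right_gen, depth, focal_px, baseline_m):
    """Hypothetical geometry-aware loss (not StereoWorld's actual term).

    left:       (B, 3, H, W) input monocular (left-eye) frames in [0, 1]
    right_gen:  (B, 3, H, W) generated right-eye frames
    depth:      (B, 1, H, W) metric depth for the left view
    focal_px:   focal length in pixels
    baseline_m: stereo baseline (interpupillary distance) in meters
    """
    b, _, h, w = left.shape

    # Disparity in pixels implied by depth: d = f * B / Z.
    # For a rectified pair, a point at column x in the right view appears
    # at column x + d in the left view; using the left-view disparity as a
    # proxy for the right-view disparity is a common approximation.
    disparity = focal_px * baseline_m / depth.clamp(min=1e-3)  # (B, 1, H, W)

    # Sampling grid that shifts x-coordinates by the disparity.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=left.device, dtype=left.dtype),
        torch.arange(w, device=left.device, dtype=left.dtype),
        indexing="ij",
    )
    xs = xs.unsqueeze(0).expand(b, -1, -1) + disparity.squeeze(1)
    ys = ys.unsqueeze(0).expand(b, -1, -1)

    # Normalize to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * xs / (w - 1) - 1.0
    grid_y = 2.0 * ys / (h - 1) - 1.0
    grid = torch.stack([grid_x, grid_y], dim=-1)  # (B, H, W, 2)

    # Warp the left frame toward the right viewpoint and compare.
    left_warped = F.grid_sample(left, grid, align_corners=True,
                                padding_mode="border")
    return F.l1_loss(left_warped, right_gen)


if __name__ == "__main__":
    # Shape-only example with random tensors; 0.063 m is a typical IPD.
    left = torch.rand(1, 3, 64, 64)
    right_gen = torch.rand(1, 3, 64, 64)
    depth = torch.rand(1, 1, 64, 64) * 5 + 1  # depths in roughly 1-6 m
    loss = geometry_reg_loss(left, right_gen, depth,
                             focal_px=500.0, baseline_m=0.063)
    print(loss.item())
```

In practice such a photometric warp loss only constrains co-visible regions; occluded areas would need masking or a learned prior, which is presumably where supervising a pretrained video generator helps.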
Source: arXiv 2512.09363