Sharp Monocular View Synthesis in Less Than a Second
1️⃣ One-Sentence Summary
This paper presents SHARP, a new method that, given only a single photograph, produces an accurate 3D scene model in under a second and renders high-quality, photorealistic novel views in real time, far surpassing prior techniques in both quality and speed.
We present SHARP, an approach to photorealistic view synthesis from a single image. Given a single photograph, SHARP regresses the parameters of a 3D Gaussian representation of the depicted scene. This is done in less than a second on a standard GPU via a single feedforward pass through a neural network. The 3D Gaussian representation produced by SHARP can then be rendered in real time, yielding high-resolution photorealistic images for nearby views. The representation is metric, with absolute scale, supporting metric camera movements. Experimental results demonstrate that SHARP delivers robust zero-shot generalization across datasets. It sets a new state of the art on multiple datasets, reducing LPIPS by 25-34% and DISTS by 21-43% versus the best prior model, while lowering the synthesis time by three orders of magnitude. Code and weights are provided at this https URL.
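To make the pipeline described in the abstract concrete, here is a minimal sketch of what "regressing 3D Gaussian parameters from a single image in one feedforward pass" could look like. All class and parameter names below (`GaussianRegressor`, the per-pixel parameterization, the layer sizes) are illustrative assumptions, not the authors' actual architecture or API; the paper's real network and renderer are described in the linked code release.

```python
# Hypothetical sketch: one feedforward pass maps a single RGB image to the
# parameters of a per-pixel 3D Gaussian scene representation. A real system
# would hand the output to a 3D Gaussian splatting rasterizer (e.g. Kerbl et
# al. 2023) to render novel views in real time.
import torch
import torch.nn as nn

class GaussianRegressor(nn.Module):
    """Toy stand-in for the feedforward network: predicts, per pixel, the
    parameters of one 3D Gaussian (mean, rotation, scale, opacity, color)."""

    # 3 (mean) + 4 (rotation quaternion) + 3 (log-scale) + 1 (opacity) + 3 (RGB)
    PARAMS_PER_GAUSSIAN = 14

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, self.PARAMS_PER_GAUSSIAN, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor) -> dict[str, torch.Tensor]:
        # image: (B, 3, H, W) -> one set of Gaussian parameters per pixel
        raw = self.backbone(image)                       # (B, 14, H, W)
        b, _, h, w = raw.shape
        raw = raw.permute(0, 2, 3, 1).reshape(b, h * w, -1)
        means, quats, log_scales, opacity, rgb = raw.split([3, 4, 3, 1, 3], dim=-1)
        return {
            "means": means,                              # metric 3D positions
            "rotations": nn.functional.normalize(quats, dim=-1),
            "scales": log_scales.exp(),                  # strictly positive
            "opacities": opacity.sigmoid(),
            "colors": rgb.sigmoid(),
        }

if __name__ == "__main__":
    net = GaussianRegressor().eval()
    image = torch.rand(1, 3, 256, 256)                   # single input photo
    with torch.no_grad():
        gaussians = net(image)                           # one feedforward pass
    print({k: tuple(v.shape) for k, v in gaussians.items()})
```

The key design point the abstract emphasizes is that this is a single regression pass rather than per-scene optimization, which is why synthesis runs in under a second instead of the minutes-to-hours typical of optimization-based Gaussian splatting, and why the predicted means can carry absolute metric scale.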
Source: arXiv: 2512.10685