arXiv submission date: 2026-01-02
📄 Abstract - AdaGaR: Adaptive Gabor Representation for Dynamic Scene Reconstruction

Reconstructing dynamic 3D scenes from monocular videos requires simultaneously capturing high-frequency appearance details and temporally continuous motion. Existing methods using single Gaussian primitives are limited by their low-pass filtering nature, while standard Gabor functions introduce energy instability. Moreover, lack of temporal continuity constraints often leads to motion artifacts during interpolation. We propose AdaGaR, a unified framework addressing both frequency adaptivity and temporal continuity in explicit dynamic scene modeling. We introduce Adaptive Gabor Representation, extending Gaussians through learnable frequency weights and adaptive energy compensation to balance detail capture and stability. For temporal continuity, we employ Cubic Hermite Splines with Temporal Curvature Regularization to ensure smooth motion evolution. An Adaptive Initialization mechanism combining depth estimation, point tracking, and foreground masks establishes stable point cloud distributions in early training. Experiments on Tap-Vid DAVIS demonstrate state-of-the-art performance (PSNR 35.49, SSIM 0.9433, LPIPS 0.0723) and strong generalization across frame interpolation, depth consistency, video editing, and stereo view synthesis. Project page: this https URL
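The abstract's core idea, a Gaussian primitive extended with a learnable frequency term plus energy compensation, can be illustrated with a minimal 1D sketch. This is a hypothetical toy version, not the paper's implementation: the actual method operates on 3D Gaussian splats, and the function names, the cosine-carrier blending, and the L2 rescaling used here as "energy compensation" are all assumptions.

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Plain Gaussian primitive (low-pass: smooth, cannot oscillate)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def adaptive_gabor(x, mu, sigma, freq, weight):
    """Toy 1D stand-in for the paper's adaptive Gabor primitive.

    A cosine carrier turns the Gaussian into a Gabor atom; a learnable
    `weight` in [0, 1] blends between the two, and the result is
    rescaled so its L2 energy matches the plain Gaussian's -- a crude
    stand-in for 'adaptive energy compensation'.
    """
    g = gaussian(x, mu, sigma)
    carrier = np.cos(2 * np.pi * freq * (x - mu))
    blended = g * ((1 - weight) + weight * carrier)
    # Energy compensation: keep total energy equal to the Gaussian's,
    # so adding high-frequency content does not destabilize amplitude.
    scale = np.sqrt(np.sum(g ** 2) / (np.sum(blended ** 2) + 1e-12))
    return blended * scale
```

With `weight = 0` the primitive reduces exactly to the Gaussian; with `weight > 0` it gains oscillatory detail while the rescaling keeps its energy fixed, which is the stability/detail trade-off the abstract describes.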

Top-level tags: computer vision · multi-modal · model training
Detailed tags: dynamic scene reconstruction · 3d gaussian splatting · temporal continuity · frequency adaptivity · video interpolation

AdaGaR: Adaptive Gabor Representation for Dynamic Scene Reconstruction


1️⃣ One-Sentence Summary

This paper proposes a new method called AdaGaR, which improves the 3D Gaussian representation by introducing learnable frequency weights and adaptive energy compensation, and combines this with temporal smoothness constraints, enabling clearer and more stable reconstruction of detail-rich, motion-coherent dynamic 3D scenes from monocular video.
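The temporal-continuity side of the summary, Cubic Hermite Splines with a curvature penalty, can also be sketched in a few lines. This is a generic illustration under assumptions: the Hermite basis below is the standard textbook form, and the discrete second-difference penalty is only a plausible stand-in for the paper's Temporal Curvature Regularization, whose exact formulation is not given here.

```python
import numpy as np

def hermite(p0, p1, m0, m1, t):
    """Standard cubic Hermite interpolation between keyframe values
    p0 and p1 with tangents m0 and m1, at normalized time t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    h00 = 2 * t3 - 3 * t2 + 1
    h10 = t3 - 2 * t2 + t
    h01 = -2 * t3 + 3 * t2
    h11 = t3 - t2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def curvature_penalty(traj):
    """Mean squared discrete second difference (acceleration) of a
    sampled trajectory -- a stand-in for a temporal curvature
    regularizer that discourages jerky motion between keyframes."""
    acc = traj[2:] - 2 * traj[1:-1] + traj[:-2]
    return float(np.mean(acc ** 2))
```

The spline guarantees that interpolated positions pass through the keyframes with matching tangents, while the curvature penalty, added to the training loss, pushes the sampled trajectory toward low acceleration, i.e. smooth motion during interpolation.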

From arXiv: 2601.00796