Space-Time Forecasting of Dynamic Scenes with Motion-aware Gaussian Grouping
1️⃣ One-Sentence Summary
This paper proposes a new method called MoGaF, which groups objects in a scene by their motion patterns and optimizes each group jointly, enabling more accurate long-term forecasting of how a dynamic scene will evolve and producing more realistic, more stable renderings of future scenes.
Forecasting dynamic scenes remains a fundamental challenge in computer vision, as limited observations make it difficult to capture coherent object-level motion and long-term temporal evolution. We present Motion Group-aware Gaussian Forecasting (MoGaF), a framework for long-term scene extrapolation built upon the 4D Gaussian Splatting representation. MoGaF introduces motion-aware Gaussian grouping and group-wise optimization to enforce physically consistent motion across both rigid and non-rigid regions, yielding spatially coherent dynamic representations. Leveraging this structured space-time representation, a lightweight forecasting module predicts future motion, enabling realistic and temporally stable scene evolution. Experiments on synthetic and real-world datasets demonstrate that MoGaF consistently outperforms existing baselines in rendering quality, motion plausibility, and long-term forecasting stability. Our project page is available at this https URL.
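To make the "motion-aware grouping" idea concrete, here is a minimal sketch of one plausible instantiation: clustering per-Gaussian motion vectors (displacements between two time steps) into groups that could then be optimized and forecast jointly. This is an illustration only, not the paper's method; `group_by_motion`, the k-means clustering, and the synthetic two-mode data are all assumptions of this sketch.

```python
# Hypothetical sketch: assign each Gaussian to a motion group by
# clustering its 3D motion vector with plain k-means. MoGaF's actual
# grouping and group-wise optimization are more involved; every name
# here is illustrative, not from the paper.
import numpy as np

def group_by_motion(motions: np.ndarray, k: int = 2, iters: int = 20) -> np.ndarray:
    """Cluster per-Gaussian motion vectors (N, 3) into k groups; returns labels (N,)."""
    # Farthest-point initialization: stable for well-separated motion modes.
    centers = [motions[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(motions - c, axis=1) for c in centers], axis=0)
        centers.append(motions[dists.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        # Assign each motion vector to its nearest center.
        d = np.linalg.norm(motions[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute centers (keep the old center if a group empties).
        for j in range(k):
            if (labels == j).any():
                centers[j] = motions[labels == j].mean(axis=0)
    return labels

# Two synthetic motion modes: near-static background vs. a rigid
# object translating along +x (a stand-in for rigid/non-rigid regions).
rng = np.random.default_rng(0)
motions = np.vstack([
    0.01 * rng.normal(size=(50, 3)),                          # static group
    np.tile([1.0, 0.0, 0.0], (50, 1)) + 0.01 * rng.normal(size=(50, 3)),  # moving group
])
labels = group_by_motion(motions, k=2)
```

Each resulting group could then be given its own (e.g. rigid or low-dimensional) motion model, which is the intuition behind enforcing physically consistent motion per group rather than per Gaussian.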
Source: arXiv: 2602.21668