CausalGS: Learning Physical Causality of 3D Dynamic Scenes with Gaussian Representations
1️⃣ One-Sentence Summary
This paper proposes CausalGS, a framework that automatically learns the physical laws and causal relationships of complex dynamic 3D scenes from multi-view videos alone, without human annotation or predefined physical conditions, and that accurately predicts objects' long-term motion trajectories and renders novel-view images.
Learning a physical model from video data that can comprehend physical laws and predict the future trajectories of objects is a formidable challenge in artificial intelligence. Prior approaches either leverage various Partial Differential Equations (PDEs) as soft constraints in the form of PINN losses, or integrate physics simulators into neural networks; however, they often rely on strong priors or high-quality geometry reconstruction. In this paper, we propose CausalGS, a framework that learns the causal dynamics of complex dynamic 3D scenes solely from multi-view videos, dispensing with any reliance on explicit priors. At its core is an inverse physics inference module that decomposes the complex dynamics observed in the video into the joint inference of two factors: the initial velocity field representing the scene's kinematics, and the intrinsic material properties governing its dynamics. This inferred physical information is then fed into a differentiable physics simulator to guide the learning process in a physics-regularized manner. Extensive experiments demonstrate that CausalGS surpasses the state of the art on the highly challenging task of long-term future frame extrapolation, while also achieving strong performance in novel view interpolation. Crucially, our work shows that, without any human annotation, the model can learn the complex interactions between multiple physical properties and understand the causal relationships driving the scene's dynamic evolution, solely from visual observations.
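To make the pipeline in the abstract concrete, here is a minimal numpy sketch of the physics-regularized idea: an inverse-inference step estimates an initial velocity field and a material parameter from observed motion, a toy differentiable simulator rolls the scene forward, and a regularization loss penalizes disagreement between the rollout and the observation. Everything here is illustrative, not the paper's actual method: the Gaussians are reduced to point particles, the "material" is a single spring stiffness `k`, and the names `infer_physics`, `simulate`, and `physics_loss` are hypothetical.

```python
import numpy as np

def simulate(x0, v0, k, dt=0.1, steps=5):
    """Toy differentiable simulator: particles pulled toward the origin
    by a harmonic force f = -k * x, integrated with explicit Euler."""
    x, v = x0.copy(), v0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        v = v + dt * (-k * x)   # material property k governs the dynamics
        x = x + dt * v          # kinematics advance the positions
        traj.append(x.copy())
    return np.stack(traj)       # shape (steps + 1, N, 3)

def infer_physics(observed, dt=0.1):
    """Stand-in for the inverse physics inference module: estimate the
    initial velocity field by finite differences and assume a stiffness."""
    v0 = (observed[1] - observed[0]) / dt
    k = 1.0                     # assumed material parameter, not inferred here
    return v0, k

def physics_loss(observed, simulated):
    """Physics-regularized term: mean squared deviation between the
    observed trajectory and the simulator's rollout."""
    return float(np.mean((observed - simulated) ** 2))

# Synthetic "observed" trajectory generated by the same toy dynamics.
rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 3))
v_true = 0.1 * rng.normal(size=(4, 3))
observed = simulate(x0, v_true, k=1.0)

v0, k = infer_physics(observed)
rollout = simulate(x0, v0, k)
loss = physics_loss(observed, rollout)
print(loss)
```

In the actual framework this residual would be one term of the training objective alongside a photometric rendering loss on the Gaussian representation; the finite-difference velocity estimate here only approximates the true initial velocity, so the loss is small but nonzero.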
Source: arXiv:2605.10586