PhyRPR: Training-Free Physics-Constrained Video Generation
1️⃣ One-Sentence Summary
This paper proposes a training-free three-stage video generation method that decouples physical reasoning from visual synthesis: it first reasons about physical states, then plans a coarse motion scaffold, and finally generates fine-grained detail, making AI-generated videos more physically plausible and motion-controllable.
Recent diffusion-based video generation models can synthesize visually plausible videos, yet they often struggle to satisfy physical constraints. A key reason is that most existing approaches remain single-stage: they entangle high-level physical understanding with low-level visual synthesis, making it hard to generate content that requires explicit physical reasoning. To address this limitation, we propose a training-free three-stage pipeline, PhyRPR: PhyReason–PhyPlan–PhyRefine, which decouples physical understanding from visual synthesis. Specifically, PhyReason uses a large multimodal model for physical state reasoning and an image generator for keyframe synthesis; PhyPlan deterministically synthesizes a controllable coarse motion scaffold; and PhyRefine injects this scaffold into diffusion sampling via a latent fusion strategy to refine appearance while preserving the planned dynamics. This staged design enables explicit physical control during generation. Extensive experiments under physics constraints show that our method consistently improves physical plausibility and motion controllability.
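The three-stage design above can be sketched in code. Everything here is an illustrative assumption, not the paper's actual implementation: the stage interfaces, the linear interpolation used as the "deterministic scaffold," and the simple convex-combination latent fusion are all stand-ins chosen to make the control flow concrete.

```python
import numpy as np

def phy_reason(prompt: str):
    # Stage 1 (assumed interface): a multimodal model would reason about
    # physical states and an image generator would render keyframes.
    # Stubbed here with synthetic "keyframe latents".
    rng = np.random.default_rng(0)
    return [rng.standard_normal((4, 8, 8)) for _ in range(2)]

def phy_plan(keyframes, num_frames: int = 8):
    # Stage 2 (assumed): deterministically expand keyframe latents into a
    # coarse motion scaffold; linear interpolation is a stand-in for the
    # paper's planning procedure.
    start, end = keyframes
    ts = np.linspace(0.0, 1.0, num_frames)
    return np.stack([(1 - t) * start + t * end for t in ts])

def phy_refine(scaffold, denoise_step, num_steps: int = 10, alpha: float = 0.5):
    # Stage 3 (assumed fusion rule): at each sampling step, blend the
    # sampled latent with the planned scaffold so refinement improves
    # appearance without discarding the planned dynamics.
    latent = np.random.default_rng(1).standard_normal(scaffold.shape)
    for _ in range(num_steps):
        latent = denoise_step(latent)
        latent = alpha * scaffold + (1 - alpha) * latent  # latent fusion
    return latent

# Toy "denoiser" that just shrinks the latent toward zero.
video = phy_refine(phy_plan(phy_reason("ball rolling downhill")),
                   denoise_step=lambda z: 0.9 * z)
print(video.shape)  # → (8, 4, 8, 8): frames × channels × height × width
```

The key design point the sketch highlights is that only Stage 3 touches the diffusion sampler; Stages 1 and 2 run entirely outside it, which is what makes the pipeline training-free and the motion explicitly controllable.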
Source: arXiv: 2601.09255