arXiv submission date: 2025-12-16
📄 Abstract - CRISP: Contact-Guided Real2Sim from Monocular Video with Planar Scene Primitives

We introduce CRISP, a method that recovers simulatable human motion and scene geometry from monocular video. Prior work on joint human-scene reconstruction either relies on data-driven priors and joint optimization with no physics in the loop, or recovers noisy geometry whose artifacts cause motion-tracking policies to fail during scene interactions. In contrast, our key insight is to recover convex, clean, and simulation-ready geometry by fitting planar primitives to a point-cloud reconstruction of the scene, via a simple clustering pipeline over depth, normals, and flow. To reconstruct scene geometry that may be occluded during interactions, we make use of human-scene contact modeling (e.g., we use human posture to reconstruct the occluded seat of a chair). Finally, we ensure that the human and scene reconstructions are physically plausible by using them to drive a humanoid controller trained via reinforcement learning. Our approach reduces motion-tracking failure rates from 55.2% to 6.9% on human-centric video benchmarks (EMDB, PROX), while delivering 43% higher RL simulation throughput. We further validate it on in-the-wild videos, including casually captured videos, Internet videos, and even Sora-generated videos. This demonstrates CRISP's ability to generate physically valid human motion and interaction environments at scale, greatly advancing real-to-sim applications for robotics and AR/VR.
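
To make the planar-primitive idea concrete, here is a minimal Python sketch (not the authors' implementation): it clusters scene points by their surface normals and fits a least-squares plane to each cluster. The function names, the use of scikit-learn's KMeans, and the toy floor-and-wall scene are all illustrative assumptions; the paper's actual pipeline also clusters over depth and optical flow and turns the fits into convex, simulation-ready primitives.

```python
# Illustrative sketch only (not CRISP's code): group scene points into
# planar clusters by their normals, then fit one plane per cluster.
import numpy as np
from sklearn.cluster import KMeans

def fit_plane(points: np.ndarray):
    """Least-squares plane through points: returns (unit normal n, offset d)
    such that n . x + d ~= 0 for points x on the plane."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value of the centered
    # points is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

def planar_primitives(points: np.ndarray, normals: np.ndarray, k: int = 6):
    """Cluster points by their estimated surface normals (assumed given,
    e.g. from a monocular normal estimator), then fit one plane per cluster."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(normals)
    planes = []
    for c in range(k):
        cluster = points[labels == c]
        if len(cluster) >= 3:  # need at least 3 points to define a plane
            planes.append(fit_plane(cluster))
    return planes, labels

if __name__ == "__main__":
    # Hypothetical toy scene: a floor (y = 0) and a wall (x = 2), with noise.
    rng = np.random.default_rng(0)
    floor = np.stack([rng.uniform(0, 2, 500), np.zeros(500), rng.uniform(0, 2, 500)], axis=1)
    wall = np.stack([np.full(500, 2.0), rng.uniform(0, 2, 500), rng.uniform(0, 2, 500)], axis=1)
    pts = np.concatenate([floor, wall]) + rng.normal(0, 0.01, (1000, 3))
    nrm = np.concatenate([np.tile([0, 1, 0], (500, 1)), np.tile([1, 0, 0], (500, 1))]).astype(float)
    planes, _ = planar_primitives(pts, nrm, k=2)
    for n, d in planes:
        print("plane normal ~", np.round(n, 2), " offset ~", round(float(d), 2))
```

On the toy scene this recovers the floor and wall planes; the appeal of plane fitting over raw mesh reconstruction, as the abstract argues, is that the resulting geometry is clean and convex, so physics engines can simulate contact against it reliably.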

Top-level tags: computer vision, robotics, multi-modal
Detailed tags: human-scene reconstruction, real-to-sim, motion tracking, planar primitives, physics simulation

CRISP: Contact-Guided Real2Sim from Monocular Video with Planar Scene Primitives


1️⃣ One-Sentence Summary

This paper proposes a new method called CRISP that reconstructs simulation-ready human motion and scene geometry from ordinary monocular video. By combining planar-primitive fitting with human-scene contact modeling, it substantially improves motion-tracking accuracy and simulation efficiency, offering a high-quality way to generate simulation data for robotics and AR/VR applications.


Source: arXiv:2512.14696