CloDS: Visual-Only Unsupervised Cloth Dynamics Learning in Unknown Conditions
1️⃣ One-Sentence Summary
This paper proposes a new method called CloDS that learns cloth dynamics in an unsupervised manner from multi-view videos alone, without prior knowledge of the cloth's physical properties, while effectively handling large deformations and self-occlusions of the cloth.
Deep learning has demonstrated remarkable capabilities in simulating complex dynamic systems. However, existing methods require known physical properties as supervision or inputs, limiting their applicability under unknown conditions. To explore this challenge, we introduce Cloth Dynamics Grounding (CDG), a novel scenario for unsupervised learning of cloth dynamics from multi-view visual observations. We further propose Cloth Dynamics Splatting (CloDS), an unsupervised dynamic learning framework designed for CDG. CloDS adopts a three-stage pipeline that first performs video-to-geometry grounding and then trains a dynamics model on the grounded meshes. To cope with large non-linear deformations and severe self-occlusions during grounding, we introduce a dual-position opacity modulation that supports bidirectional mapping between 2D observations and 3D geometry via mesh-based Gaussian splatting in the video-to-geometry grounding stage. It jointly considers the absolute and relative positions of Gaussian components. Comprehensive experimental evaluations demonstrate that CloDS effectively learns cloth dynamics from visual data while maintaining strong generalization capabilities for unseen configurations. Our code is available at this https URL. Visualization results are available at this https URL.
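To make the idea of dual-position opacity modulation concrete, here is a minimal sketch of how a per-Gaussian opacity might jointly depend on an absolute position term and a relative (mesh-anchored offset) term. The Gaussian-kernel weighting, the function name, and all parameters (`sigma_abs`, `sigma_rel`) are illustrative assumptions for this note, not the formula from the paper.

```python
import numpy as np

def dual_position_opacity(abs_pos, rel_pos, base_opacity,
                          sigma_abs=1.0, sigma_rel=0.1):
    """Hypothetical dual-position opacity modulation (illustrative only).

    abs_pos:      (N, 3) absolute world-space Gaussian centers
    rel_pos:      (N, 3) offsets of each Gaussian from its anchor on the mesh
    base_opacity: (N,)   learned base opacities in [0, 1]

    Returns modulated opacities in [0, 1].
    """
    # Absolute term: down-weight Gaussians far from the scene center
    w_abs = np.exp(-np.sum(abs_pos ** 2, axis=1) / (2 * sigma_abs ** 2))
    # Relative term: down-weight Gaussians that drift from their mesh anchor,
    # which matters under large deformation and self-occlusion
    w_rel = np.exp(-np.sum(rel_pos ** 2, axis=1) / (2 * sigma_rel ** 2))
    return base_opacity * w_abs * w_rel
```

The point of the sketch is only the structure: one multiplicative factor keyed to where a Gaussian sits in the world, and one keyed to where it sits relative to the cloth mesh it is attached to.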
Source: arXiv:2602.01844