探索变形物体的6D位姿估计 / Exploring 6D Object Pose Estimation with Deformation
1️⃣ One-Sentence Summary
This paper introduces DeSOPE, a large-scale dataset built specifically to study 6D pose estimation of objects undergoing deformation (e.g., from wear or impact). Experiments show that existing methods degrade sharply when objects deform, underscoring the importance of handling deformation in practical applications.
We present DeSOPE, a large-scale dataset for 6DoF pose estimation of deformed objects. Most 6D object pose methods assume rigid or articulated objects, an assumption that fails in practice as objects deviate from their canonical shapes due to wear, impact, or deformation. To model this, we introduce the DeSOPE dataset, which features high-fidelity 3D scans of 26 common object categories, each captured in one canonical state and three deformed configurations, with accurate 3D registration to the canonical mesh. It also provides an RGB-D dataset with 133K frames across diverse scenarios and 665K pose annotations produced via a semi-automatic pipeline: we first annotate 2D masks for each instance, then compute initial poses with an object pose method, refine them through an object-level SLAM system, and finally perform manual verification to produce the final annotations. We evaluate several object pose methods and find that performance drops sharply with increasing deformation, suggesting that robust handling of such deformations is critical for practical applications. The project page and dataset are available at this https URL.
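The four-stage annotation pipeline described above can be sketched in code. This is a hypothetical illustration only: every function body below is a stub (the real pipeline would use an interactive segmentation tool, an off-the-shelf pose estimator, and an object-level SLAM backend), and all names are invented for this sketch. Poses are represented as 4x4 homogeneous transforms.

```python
import numpy as np

def annotate_2d_mask(frame):
    # Stage 1: 2D instance mask annotation (stubbed as a full-frame mask;
    # in practice this comes from human annotators or a segmentation tool).
    return np.ones(frame.shape[:2], dtype=bool)

def estimate_initial_pose(frame, mask):
    # Stage 2: initial 6D pose from an object pose method (stubbed as the
    # identity transform: no rotation, zero translation).
    return np.eye(4)

def refine_with_slam(poses):
    # Stage 3: object-level SLAM refinement. As a trivial stand-in for a
    # pose-graph optimizer, smooth the translations over the sequence.
    mean_t = np.mean([p[:3, 3] for p in poses], axis=0)
    refined = []
    for p in poses:
        q = p.copy()
        q[:3, 3] = mean_t
        refined.append(q)
    return refined

def manually_verified(pose):
    # Stage 4: manual verification (stubbed as a sanity check that the
    # pose is a valid homogeneous transform).
    return np.allclose(pose[3], [0.0, 0.0, 0.0, 1.0])

def annotate_sequence(frames):
    # Run all four stages over an RGB-D sequence; keep only verified poses.
    poses = [estimate_initial_pose(f, annotate_2d_mask(f)) for f in frames]
    return [p for p in refine_with_slam(poses) if manually_verified(p)]

# Toy sequence of 5 dummy "frames".
frames = [np.zeros((4, 4, 3)) for _ in range(5)]
annotations = annotate_sequence(frames)
print(len(annotations))  # 5 verified pose annotations
```

The design point the sketch tries to convey is the division of labor: per-frame estimates are cheap but noisy, the SLAM stage enforces sequence-level consistency, and humans only verify the refined output rather than labeling 6D poses from scratch.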
Source: arXiv: 2604.06720