arXiv submission date: 2026-04-21
📄 Abstract - InHabit: Leveraging Image Foundation Models for Scalable 3D Human Placement

Training embodied agents to understand 3D scenes as humans do requires large-scale data of people meaningfully interacting with diverse environments, yet such data is scarce. Real-world motion capture is costly and limited to controlled settings, while existing synthetic datasets rely on simple geometric heuristics that ignore rich scene context. In contrast, 2D foundation models trained on internet-scale data have implicitly acquired commonsense knowledge of human-environment interactions. To transfer this knowledge into 3D, we introduce InHabit, a fully automatic and scalable data generator for populating 3D scenes with interacting humans. InHabit follows a render-generate-lift principle: given a rendered 3D scene, a vision-language model proposes contextually meaningful actions, an image-editing model inserts a human, and an optimization procedure lifts the edited result into physically plausible SMPL-X bodies aligned with the scene geometry. Applied to Habitat-Matterport3D, InHabit produces the first large-scale photorealistic 3D human-scene interaction dataset, containing 78K samples across 800 building-scale scenes with complete 3D geometry, SMPL-X bodies, and RGB images. Augmenting standard training data with our samples improves RGB-based 3D human-scene reconstruction and contact estimation, and in a perceptual user study our data is preferred in 78% of cases over the state of the art.
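The render-generate-lift principle described above can be sketched as a simple pipeline. This is a minimal illustrative stand-in, not the paper's implementation: every function name, data type, and return value below is a hypothetical placeholder for the actual models (a vision-language model, an image-editing model, and an SMPL-X fitting stage).

```python
from dataclasses import dataclass

@dataclass
class Sample:
    scene_id: str
    action: str          # action proposed by the VLM stage
    edited_image: str    # image with the inserted human (placeholder string)
    smplx_params: dict   # lifted 3D body parameters (placeholder values)

def propose_action(rendered_view: str) -> str:
    """Stand-in for the vision-language model: propose a contextually
    plausible action for the rendered scene view."""
    return f"sit on a chair visible in {rendered_view}"

def insert_human(rendered_view: str, action: str) -> str:
    """Stand-in for the image-editing model: return an edited image
    showing a person performing the proposed action."""
    return f"{rendered_view}+person({action})"

def lift_to_smplx(edited_image: str) -> dict:
    """Stand-in for the optimization stage: fit SMPL-X body parameters
    consistent with the edited image and the scene geometry."""
    return {"pose": [0.0] * 63, "betas": [0.0] * 10}

def generate_sample(scene_id: str) -> Sample:
    rendered = f"render({scene_id})"         # 1. render the 3D scene
    action = propose_action(rendered)        # 2. VLM proposes an action
    edited = insert_human(rendered, action)  # 3. image model inserts a human
    smplx = lift_to_smplx(edited)            # 4. lift to a 3D SMPL-X body
    return Sample(scene_id, action, edited, smplx)

sample = generate_sample("hm3d_scene_0042")
print(sample.action)
```

Run over many scene renders, a loop like `generate_sample` would accumulate the kind of (scene, action, image, SMPL-X body) tuples the abstract reports at 78K-sample scale.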

Top tags: computer vision 3d data
Detailed tags: human-scene interaction dataset generation foundation models smpl-x 3d reconstruction

InHabit: Leveraging Image Foundation Models for Scalable 3D Human Placement


1️⃣ One-sentence summary

This paper proposes InHabit, a fully automatic data-generation method that leverages the commonsense knowledge of 2D vision-language and image-editing models to place humans naturally into 3D scenes, creating a large-scale 3D dataset with realistic human-scene interactions and significantly improving performance on 3D human-scene reconstruction and contact estimation.

Source: arXiv 2604.19673