arXiv submission date: 2025-12-06
📄 Abstract - Embodied Referring Expression Comprehension in Human-Robot Interaction

As robots enter human workspaces, there is a crucial need for them to comprehend embodied human instructions, enabling intuitive and fluent human-robot interaction (HRI). However, accurate comprehension is challenging due to a lack of large-scale datasets that capture natural embodied interactions in diverse HRI settings. Existing datasets suffer from perspective bias, single-view collection, inadequate coverage of nonverbal gestures, and a predominant focus on indoor environments. To address these issues, we present the Refer360 dataset, a large-scale dataset of embodied verbal and nonverbal interactions collected across diverse viewpoints in both indoor and outdoor settings. Additionally, we introduce MuRes, a multimodal guided residual module designed to improve embodied referring expression comprehension. MuRes acts as an information bottleneck, extracting salient modality-specific signals and reinforcing them into pre-trained representations to form complementary features for downstream tasks. We conduct extensive experiments on four HRI datasets, including the Refer360 dataset, and demonstrate that current multimodal models fail to capture embodied interactions comprehensively; however, augmenting them with MuRes consistently improves performance. These findings establish Refer360 as a valuable benchmark and exhibit the potential of guided residual learning to advance embodied referring expression comprehension in robots operating within human environments.

Top-level tags: robotics, multi-modal, agents
Detailed tags: embodied ai, human-robot interaction, referring expression comprehension, multimodal dataset, residual learning

Embodied Referring Expression Comprehension in Human-Robot Interaction


1️⃣ One-Sentence Summary

To address the difficulty robots face in understanding human instructions that combine language and gestures in real-world environments, this paper introduces Refer360, a large-scale dataset of multi-view embodied interactions collected both indoors and outdoors, and proposes MuRes, a multimodal guided residual module that consistently improves existing models' comprehension of such embodied referring expressions.
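
The abstract describes MuRes only at a high level: an information bottleneck that extracts salient modality-specific signals and reinforces them into pre-trained representations as a residual. The snippet below is a minimal sketch of that idea under stated assumptions; the class name `GuidedResidualModule`, the layer sizes, the two-modality setup, and the gating design are all illustrative guesses, not the paper's actual architecture.

```python
# Minimal sketch of a "guided residual" fusion module in the spirit of MuRes,
# based only on the abstract's description. Names and dimensions are assumptions.
import torch
import torch.nn as nn


class GuidedResidualModule(nn.Module):
    def __init__(self, feat_dim: int = 768, bottleneck_dim: int = 128):
        super().__init__()
        # Bottleneck per modality: compress features to keep only salient signals.
        self.verbal_bottleneck = nn.Sequential(
            nn.Linear(feat_dim, bottleneck_dim), nn.GELU())
        self.gesture_bottleneck = nn.Sequential(
            nn.Linear(feat_dim, bottleneck_dim), nn.GELU())
        # Project the fused bottleneck features back to the backbone's feature size.
        self.expand = nn.Linear(2 * bottleneck_dim, feat_dim)
        # Learnable gate so the residual starts near zero and does not disturb
        # the frozen pre-trained representation early in training (assumed design).
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, pretrained_feat, verbal_feat, gesture_feat):
        # pretrained_feat: [B, D] output of a frozen multimodal backbone
        # verbal_feat / gesture_feat: [B, D] modality-specific features
        salient = torch.cat([
            self.verbal_bottleneck(verbal_feat),
            self.gesture_bottleneck(gesture_feat),
        ], dim=-1)
        residual = self.expand(salient)
        # Reinforce the pre-trained representation with the guided residual.
        return pretrained_feat + torch.tanh(self.gate) * residual


# Usage with random tensors standing in for real features.
if __name__ == "__main__":
    module = GuidedResidualModule()
    b, d = 4, 768
    fused = module(torch.randn(b, d), torch.randn(b, d), torch.randn(b, d))
    print(fused.shape)  # torch.Size([4, 768])
```

The residual form means the downstream head still sees the original pre-trained features when the gate is closed, which matches the abstract's claim that MuRes augments existing multimodal models rather than replacing them.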


Source: arXiv 2512.06558