arXiv submission date: 2026-03-24
📄 Abstract - A Multimodal Framework for Human-Multi-Agent Interaction

Human-robot interaction is increasingly moving toward multi-robot, socially grounded environments. Existing systems struggle to integrate multimodal perception, embodied expression, and coordinated decision-making in a unified framework. This limits natural and scalable interaction in shared physical spaces. We address this gap by introducing a multimodal framework for human-multi-agent interaction in which each robot operates as an autonomous cognitive agent with integrated multimodal perception and Large Language Model (LLM)-driven planning grounded in embodiment. At the team level, a centralized coordination mechanism regulates turn-taking and agent participation to prevent overlapping speech and conflicting actions. Implemented on two humanoid robots, our framework enables coherent multi-agent interaction through interaction policies that combine speech, gesture, gaze, and locomotion. Representative interaction runs demonstrate coordinated multimodal reasoning across agents and grounded embodied responses. Future work will focus on larger-scale user studies and deeper exploration of socially grounded multi-agent interaction dynamics.
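The centralized turn-taking mechanism described in the abstract can be pictured as a mutual-exclusion "floor" over the agents: only one robot may speak or act at a time. The sketch below is a minimal illustration of that idea under assumptions, not the paper's implementation; all names (`TurnCoordinator`, `request_floor`, the round-robin participation policy) are hypothetical.

```python
# Minimal sketch of centralized turn-taking for a multi-robot team.
# Hypothetical names and policy; not the paper's actual API.
from dataclasses import dataclass
from typing import Optional
import itertools


@dataclass
class Agent:
    name: str

    def act(self, utterance: str, gesture: str) -> None:
        # Stand-in for the robot's multimodal response (speech + gesture).
        print(f"[{self.name}] says {utterance!r} with gesture {gesture!r}")


class TurnCoordinator:
    """Grants the 'floor' to one agent at a time, so speech and
    actions from different robots never overlap or conflict."""

    def __init__(self, agents: list[Agent]) -> None:
        self.agents = agents
        self.floor_holder: Optional[Agent] = None
        self._round_robin = itertools.cycle(agents)

    def request_floor(self, agent: Agent) -> bool:
        # Grant the floor only if no other agent currently holds it.
        if self.floor_holder is None:
            self.floor_holder = agent
            return True
        return False

    def release_floor(self, agent: Agent) -> None:
        if self.floor_holder is agent:
            self.floor_holder = None

    def step(self, utterance: str, gesture: str) -> None:
        # Simple participation policy: rotate through agents round-robin.
        agent = next(self._round_robin)
        if self.request_floor(agent):
            agent.act(utterance, gesture)
            self.release_floor(agent)


if __name__ == "__main__":
    team = TurnCoordinator([Agent("robot_A"), Agent("robot_B")])
    team.step("Hello! How can we help?", "wave")
    team.step("I can show you the way.", "point")
```

A round-robin policy is the simplest participation rule; the paper's coordinator presumably selects the responding agent based on the interaction context rather than a fixed rotation.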

Top-level tags: multi-agents, robotics, systems
Detailed tags: human-robot interaction, multimodal perception, LLM planning, embodied agents, coordinated decision-making

A Multimodal Framework for Human-Multi-Agent Interaction


1️⃣ One-sentence summary

This paper proposes a unified framework that lets multiple robots interact naturally with people as autonomous intelligent agents: by combining multimodal perception with large language model planning and introducing a centralized coordination mechanism, it enables a robot team to respond in a coordinated way through speech, gesture, and movement.

Source: arXiv 2603.23271