
arXiv submission date: 2026-02-07
📄 Abstract - Bridging Speech, Emotion, and Motion: a VLM-based Multimodal Edge-deployable Framework for Humanoid Robots

Effective human-robot interaction requires emotionally rich multimodal expressions, yet most humanoid robots lack coordinated speech, facial expressions, and gestures. Meanwhile, real-world deployment demands on-device solutions that can operate autonomously without continuous cloud connectivity. To bridge Speech, Emotion, and Motion, we present SeM², a Vision Language Model-based framework that orchestrates emotionally coherent multimodal interactions through three key components: a multimodal perception module capturing user contextual cues, a Chain-of-Thought reasoning module for response planning, and a novel Semantic-Sequence Aligning Mechanism (SSAM) that ensures precise temporal coordination between verbal content and physical expressions. We implement both cloud-based and edge-deployed versions (SeM²ₑ), with the latter knowledge-distilled to operate efficiently on edge hardware while maintaining 95% of the relative performance. Comprehensive evaluations demonstrate that our approach significantly outperforms unimodal baselines in naturalness, emotional clarity, and modal coherence, advancing socially expressive humanoid robotics for diverse real-world environments.
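To make the described coordination concrete, below is a minimal, hypothetical Python sketch of the three-stage pipeline in the abstract (multimodal perception → Chain-of-Thought planning → SSAM temporal alignment). All class, function, and field names are illustrative assumptions for this summary; the abstract does not specify an API, and the timing heuristic here is a placeholder rather than the paper's mechanism.

```python
# Hypothetical sketch of the SeM^2 pipeline: perception -> CoT planning -> SSAM alignment.
# Names and data structures are assumptions made for illustration only.
from dataclasses import dataclass


@dataclass
class Segment:
    text: str        # phrase to speak
    emotion: str     # facial expression label
    gesture: str     # motion primitive identifier
    start_s: float   # onset time within the utterance
    dur_s: float     # duration of the co-expressed face/gesture


def perceive(image, audio):
    # Multimodal perception: extract user contextual cues (affect, scene) from camera and mic.
    return {"user_affect": "curious", "scene": "greeting"}


def plan_response(context):
    # Chain-of-Thought reasoning with the VLM: decide what to say and which emotions/gestures fit.
    return {
        "phrases": ["Hello!", "Nice to meet you."],
        "emotions": ["joy", "warmth"],
        "gestures": ["wave", "nod"],
    }


def ssam_align(plan):
    # Semantic-Sequence Aligning (as described in the abstract): bind each phrase to a facial
    # expression and gesture with shared timing, keeping speech and motion temporally coordinated.
    segments, t = [], 0.0
    for text, emo, ges in zip(plan["phrases"], plan["emotions"], plan["gestures"]):
        dur = 0.4 * len(text.split()) + 0.6   # crude per-phrase duration estimate (placeholder)
        segments.append(Segment(text, emo, ges, start_s=t, dur_s=dur))
        t += dur
    return segments


def interact(image, audio):
    # A real robot controller would schedule TTS, face animation, and joint trajectories here.
    for seg in ssam_align(plan_response(perceive(image, audio))):
        print(f"[{seg.start_s:4.1f}s] say {seg.text!r} | face={seg.emotion} | gesture={seg.gesture}")


interact(image=None, audio=None)
```

The point of the sketch is the data flow: every spoken phrase leaves the planner already paired with an emotion label and a gesture, and the alignment step assigns them a common onset and duration so no modality drifts out of sync.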

Top-level tags: robotics, multi-modal agents
Detailed tags: human-robot interaction, vision language model, edge computing, emotional coherence, multimodal coordination

Bridging Speech, Emotion, and Motion: a VLM-based Multimodal Edge-deployable Framework for Humanoid Robots


1️⃣ One-sentence summary

This paper presents an intelligent framework called SeM² that lets a robot coordinate speech, facial expressions, and body movements as naturally as a person does when expressing emotion while speaking; after optimization, the framework runs efficiently on the robot's own onboard hardware without relying on the cloud.

Source: arXiv 2602.07434