arXiv submission date: 2026-02-25
📄 Abstract - iMiGUE-Speech: A Spontaneous Speech Dataset for Affective Analysis

This work presents iMiGUE-Speech, an extension of the iMiGUE dataset that provides a spontaneous affective corpus for studying emotional and affective states. The new release focuses on speech and enriches the original dataset with additional metadata, including speech transcripts, speaker-role separation between interviewer and interviewee, and word-level forced alignments. Unlike existing emotional speech datasets that rely on acted or laboratory-elicited emotions, iMiGUE-Speech captures spontaneous affect arising naturally from real match outcomes. To demonstrate the utility of the dataset and establish initial benchmarks, we introduce two evaluation tasks for comparative assessment: speech emotion recognition and transcript-based sentiment analysis. These tasks leverage state-of-the-art pre-trained representations to assess the dataset's ability to capture spontaneous affective states from both acoustic and linguistic modalities. iMiGUE-Speech can also be synchronously paired with micro-gesture annotations from the original iMiGUE dataset, forming a uniquely multimodal resource for studying speech-gesture affective dynamics. The extended dataset is available at this https URL.
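Since the release provides word-level forced alignments and can be synchronously paired with iMiGUE's micro-gesture annotations, the pairing can be done by temporal interval overlap. The sketch below is a minimal illustration of that idea; the `Word`/`Gesture` records and the gesture label are hypothetical and do not reflect the dataset's actual schema.

```python
# Hypothetical sketch: pairing word-level forced alignments (iMiGUE-Speech)
# with micro-gesture intervals (iMiGUE) by temporal overlap.
# The data structures below are assumptions, not the dataset's real format.
from dataclasses import dataclass


@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float


@dataclass
class Gesture:
    label: str
    start: float
    end: float


def overlap(a_start: float, a_end: float, b_start: float, b_end: float) -> float:
    """Length of the temporal overlap between two intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))


def pair_words_with_gestures(words, gestures, min_overlap=0.0):
    """Return (word, gesture_label) pairs whose time spans overlap."""
    return [
        (w.text, g.label)
        for w in words
        for g in gestures
        if overlap(w.start, w.end, g.start, g.end) > min_overlap
    ]


# Toy example with made-up timestamps and a made-up gesture label.
words = [Word("I", 0.0, 0.2), Word("lost", 0.2, 0.6), Word("again", 0.6, 1.1)]
gestures = [Gesture("touching_face", 0.5, 1.0)]
print(pair_words_with_gestures(words, gestures))
# → [('lost', 'touching_face'), ('again', 'touching_face')]
```

In a real pipeline the same overlap test would run per video, after both streams are mapped to a shared clock, which is what the synchronous pairing in the dataset is meant to enable.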

Top-level tags: audio, multi-modal data
Detailed tags: speech emotion recognition, affective computing, spontaneous speech, multimodal dataset, sentiment analysis

iMiGUE-Speech: A Spontaneous Speech Dataset for Affective Analysis


1️⃣ One-Sentence Summary

This paper releases a new dataset, iMiGUE-Speech, which captures spontaneous affect by recording people's natural conversations after real match outcomes. It offers a valuable resource for studying genuine emotion in both speech and text, and supports multimodal affective analysis.

Source: arXiv:2602.21464