
arXiv submission date: 2026-03-10
📄 Abstract - MissBench: Benchmarking Multimodal Affective Analysis under Imbalanced Missing Modalities

Multimodal affective computing underpins key tasks such as sentiment analysis and emotion recognition. Standard evaluations, however, often assume that textual, acoustic, and visual modalities are equally available. In real applications, some modalities are systematically more fragile or expensive, creating imbalanced missing rates and training biases that task-level metrics alone do not reveal. We introduce MissBench, a benchmark and framework for multimodal affective tasks that standardizes both shared and imbalanced missing-rate protocols on four widely used sentiment and emotion datasets. MissBench also defines two diagnostic metrics. The Modality Equity Index (MEI) measures how fairly different modalities contribute across missing-modality configurations. The Modality Learning Index (MLI) quantifies optimization imbalance by comparing modality-specific gradient norms during training, aggregated across modality-related modules. Experiments on representative method families show that models that appear robust under shared missing rates can still exhibit marked modality inequity and optimization imbalance under imbalanced conditions. These findings position MissBench, together with MEI and MLI, as practical tools for stress-testing and analyzing multimodal affective models in realistic incomplete-modality settings. For reproducibility, we release our code at: this https URL
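To make the difference between the two protocols concrete, here is a minimal sketch of per-modality masking. The paper's exact protocol is not spelled out in the abstract, so the function name, the zero-fill convention for dropped modalities, and the specific rates below are illustrative assumptions.

```python
import random

def apply_missing(sample, missing_rates, rng=random):
    """Drop each modality independently with its own missing rate.
    Dropped modalities are zero-filled (one common convention;
    the benchmark may use a different placeholder)."""
    masked = {}
    for name, feats in sample.items():
        if rng.random() < missing_rates.get(name, 0.0):
            masked[name] = [0.0] * len(feats)  # modality is missing
        else:
            masked[name] = feats
    return masked

# Shared protocol: every modality has the same missing rate.
shared = {"text": 0.3, "audio": 0.3, "vision": 0.3}
# Imbalanced protocol: fragile/expensive modalities go missing more often.
imbalanced = {"text": 0.1, "audio": 0.3, "vision": 0.6}

sample = {"text": [0.2, 0.5], "audio": [0.1, 0.9], "vision": [0.4, 0.7]}
print(apply_missing(sample, imbalanced))
```

Under the shared protocol a single scalar controls all modalities; the imbalanced protocol is the same loop with a per-modality rate dictionary, which is what exposes the biases the abstract describes.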

Top-level tags: multi-modal benchmark model evaluation
Detailed tags: affective computing missing modalities modality imbalance sentiment analysis emotion recognition

MissBench: Benchmarking Multimodal Affective Analysis under Imbalanced Missing Modalities


1️⃣ One-sentence summary

This paper proposes MissBench, a benchmark framework for evaluating multimodal affective analysis models in the realistic setting where different modalities (e.g., text, audio, vision) have imbalanced missing rates, and introduces two diagnostic metrics that quantify how fairly a model uses each modality and how balanced its optimization is during training.
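The abstract says MLI compares modality-specific gradient norms aggregated across modality-related modules, but does not give the formula. The sketch below is a toy proxy under that description: it averages the L2 gradient norm over each modality's modules and reports the largest-to-smallest ratio. The function names and the ratio-based formula are assumptions, not the paper's definition.

```python
import math

def gradient_norm(grads):
    """L2 norm over a flat list of gradient values for one module."""
    return math.sqrt(sum(g * g for g in grads))

def modality_learning_imbalance(per_modality_grads):
    """Toy MLI-style score: ratio of the largest to the smallest
    average gradient norm across modality-specific modules.
    1.0 means balanced optimization; larger values mean one
    modality dominates the updates. (Hypothetical formula; the
    paper's exact MLI definition may differ.)"""
    avg = {
        m: sum(gradient_norm(g) for g in modules) / len(modules)
        for m, modules in per_modality_grads.items()
    }
    return max(avg.values()) / max(min(avg.values()), 1e-12)

# Example: the text branch receives much larger gradients than
# the audio and vision branches, i.e. optimization is imbalanced.
grads = {
    "text":   [[0.9, 0.4], [0.7, 0.2]],
    "audio":  [[0.1, 0.05], [0.08, 0.02]],
    "vision": [[0.2, 0.1], [0.15, 0.05]],
}
print(modality_learning_imbalance(grads))  # well above 1.0
```

In a real training loop, `per_modality_grads` would be collected from the parameters of each modality's encoder/fusion modules after `backward()`; tracking this score alongside task metrics is what lets the benchmark surface optimization imbalance that accuracy alone hides.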

Source: arXiv: 2603.09874