Abstract - Enhance-then-Balance Modality Collaboration for Robust Multimodal Sentiment Analysis
Multimodal sentiment analysis (MSA) integrates heterogeneous text, audio, and visual signals to infer human emotions. While recent approaches leverage cross-modal complementarity, they often fail to fully exploit the weaker modalities. In practice, dominant modalities tend to overshadow non-verbal ones, inducing modality competition and limiting their overall contribution. This imbalance degrades fusion performance and robustness under noisy or missing modalities. To address this, we propose the Enhance-then-Balance Modality Collaboration (EBMC) framework. EBMC improves representation quality via semantic disentanglement and cross-modal enhancement, strengthening the weaker modalities. To prevent dominant modalities from overwhelming the others, an Energy-guided Modality Coordination mechanism achieves implicit gradient rebalancing through a differentiable equilibrium objective. Furthermore, Instance-aware Modality Trust Distillation estimates sample-level reliability to adaptively modulate fusion weights, ensuring robustness. Extensive experiments demonstrate that EBMC achieves state-of-the-art or competitive results and maintains strong performance under missing-modality settings.
Enhance-then-Balance Modality Collaboration for Robust Multimodal Sentiment Analysis
1️⃣ One-Sentence Summary
This paper proposes a new model called EBMC, which first enhances the representational capacity of the weaker modalities (such as audio and vision) and then balances the collaboration among modalities. This effectively addresses the problem in multimodal sentiment analysis where a dominant modality (such as text) suppresses the others, so the model maintains accurate and robust sentiment predictions even when the data is noisy or some modalities are missing.
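The "sample-level reliability modulates fusion weights" idea from the abstract can be illustrated with a minimal sketch. This is not EBMC's actual Instance-aware Modality Trust Distillation (the paper's scoring mechanism is not specified here); it only shows the general pattern: per-sample reliability logits are softmax-normalized into fusion weights, so an unreliable modality contributes little to the fused representation.

```python
import numpy as np

def reliability_weighted_fusion(feats, scores):
    """Toy reliability-weighted fusion (illustrative, not EBMC's exact method).

    feats:  list of M arrays, each [batch, dim] (e.g. text/audio/visual features)
    scores: [batch, M] per-sample reliability logits for each modality
    Returns the fused [batch, dim] representation.
    """
    # Softmax over modalities -> per-sample fusion weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # [batch, M]
    stacked = np.stack(feats, axis=1)                   # [batch, M, dim]
    # Weighted sum over the modality axis
    return (weights[..., None] * stacked).sum(axis=1)   # [batch, dim]

# Usage: when one modality's reliability logit dominates, the fused
# output collapses toward that modality's features.
rng = np.random.default_rng(0)
text, audio, visual = (rng.standard_normal((4, 8)) for _ in range(3))
scores = np.array([[10.0, -10.0, -10.0]] * 4)  # audio/visual judged unreliable
fused = reliability_weighted_fusion([text, audio, visual], scores)
```

In a real model the reliability logits would come from a learned head over each modality's features; here they are hand-set only to make the gating behavior visible.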