Cross-Modal Rationale Transfer for Explainable Humanitarian Classification on Social Media
1️⃣ One-Sentence Summary
This paper proposes a new multimodal classification framework in which text and images "explain" each other. It not only identifies humanitarian event categories from social media posts more accurately, but also automatically produces human-readable textual and visual rationales, while substantially reducing manual annotation effort.
Advances in social media data dissemination enable the provision of real-time information during a crisis. This information falls into different categories, such as infrastructure damage or persons missing or stranded in the affected zone. Existing methods attempt to classify text and images into various humanitarian categories, but their decision-making process remains largely opaque, which hinders their deployment in real-life applications. Recent work has sought to improve transparency by extracting textual rationales from tweets to explain predicted classes. However, such explainable classification methods have mostly focused on text rather than on crisis-related images. In this paper, we propose an interpretable-by-design multimodal classification framework. Our method first learns a joint representation of text and image using a visual-language transformer model and extracts text rationales. It then extracts image rationales by mapping them to the text rationales. Our approach demonstrates how rationales in one modality can be learned from another through cross-modal rationale transfer, which saves annotation effort. Finally, tweets are classified based on the extracted rationales. Experiments are conducted on the CrisisMMD benchmark dataset, and the results show that our proposed method boosts classification Macro-F1 by 2-35% while extracting accurate text tokens and image patches as rationales. Human evaluation further supports the claim that our method retrieves better image rationale patches (a 12% improvement) that help identify humanitarian classes. Our method also adapts well to new, unseen datasets in zero-shot mode, achieving an accuracy of 80%.
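To make the cross-modal rationale transfer idea concrete, here is a minimal sketch of how text rationales could be mapped to image patches using the attention maps of a vision-language transformer. It assumes the model exposes a [CLS]-to-token attention vector and a token-to-patch cross-attention matrix; the tensor names, the top-k selection rule, and the random inputs are illustrative assumptions for a self-contained demo, not the paper's exact procedure.

```python
import torch

# Toy dimensions: a "tweet" with T text tokens and an image split into P patches.
T, P, k_text, k_patch = 12, 49, 3, 5

# Assumed available from a vision-language transformer (e.g. a ViLT-style model):
#   cls_to_text   : attention from the [CLS] token to each text token, shape (T,)
#   text_to_patch : cross-attention from text tokens to image patches, shape (T, P)
# Here both are fabricated with random values so the sketch runs on its own.
torch.manual_seed(0)
cls_to_text = torch.rand(T).softmax(dim=-1)
text_to_patch = torch.rand(T, P).softmax(dim=-1)

# Step 1: pick the top-k most attended text tokens as the text rationale.
text_rationale = torch.topk(cls_to_text, k_text).indices  # token indices

# Step 2: transfer the rationale across modalities -- aggregate the
# cross-attention rows of the rationale tokens and keep the image patches
# they attend to most. No patch-level annotation is needed.
patch_scores = text_to_patch[text_rationale].mean(dim=0)  # shape (P,)
image_rationale = torch.topk(patch_scores, k_patch).indices  # patch indices

print("text rationale tokens  :", text_rationale.tolist())
print("image rationale patches:", image_rationale.tolist())
```

In this reading, the annotation savings come from Step 2: only text rationales need supervision, and the image rationales fall out of the learned cross-attention for free.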
Source: arXiv: 2603.18611