More Than Sum of Its Parts: Deciphering Intent Shifts in Multimodal Hate Speech Detection
1️⃣ One-Sentence Summary
Targeting the increasingly complex and hard-to-detect implicit hate speech expressed through multimodal (image-text) content on social media, this paper proposes a new framework named ARCADE, which simulates a courtroom debate to analyze the subtle interplay between image and text, enabling more accurate detection of hidden malicious intent.
Combating hate speech on social media is critical for securing cyberspace, yet relies heavily on the efficacy of automated detection systems. As content formats evolve, hate speech is transitioning from solely plain text to complex multimodal expressions, making implicit attacks harder to spot. Current systems, however, often falter on these subtle cases, as they struggle with multimodal content where the emergent meaning transcends the aggregation of individual modalities. To bridge this gap, we move beyond binary classification to characterize semantic intent shifts where modalities interact to construct implicit hate from benign cues or neutralize toxicity through semantic inversion. Guided by this fine-grained formulation, we curate the Hate via Vision-Language Interplay (H-VLI) benchmark where the true intent hinges on the intricate interplay of modalities rather than overt visual or textual slurs. To effectively decipher these complex cues, we further propose the Asymmetric Reasoning via Courtroom Agent DEbate (ARCADE) framework. By simulating a judicial process where agents actively argue for accusation and defense, ARCADE forces the model to scrutinize deep semantic cues before reaching a verdict. Extensive experiments demonstrate that ARCADE significantly outperforms state-of-the-art baselines on H-VLI, particularly for challenging implicit cases, while maintaining competitive performance on established benchmarks. Our code and data are available at: this https URL
Source: arXiv:2603.21298