AULLM++: Structural Reasoning with Large Language Models for Micro-Expression Recognition
1️⃣ One-Sentence Summary
This paper proposes a new framework, AULLM++, which leverages large language models for reasoning: by fusing multi-granularity visual evidence and modeling the structural relationships among facial action units, it substantially improves the accuracy and generalization of recognizing micro-expressions from subtle facial muscle movements.
Micro-expression Action Unit (AU) detection identifies localized AUs from subtle facial muscle activations, providing a foundation for decoding affective cues. Previous methods face three key limitations: (1) heavy reliance on low-density visual information, leaving discriminative evidence vulnerable to background noise; (2) coarse-grained feature processing that misaligns with the demand for fine-grained representations; and (3) neglect of inter-AU correlations, restricting the parsing of complex expression patterns. We propose AULLM++, a reasoning-oriented framework leveraging Large Language Models (LLMs), which injects visual features into textual prompts as actionable semantic premises to guide inference. It decomposes AU prediction into three stages: evidence construction, structure modeling, and deduction-based prediction. Specifically, a Multi-Granularity Evidence-Enhanced Fusion Projector (MGE-EFP) fuses mid-level texture cues with high-level semantics, distilling them into a compact Content Token (CT). Furthermore, inspired by micro- and macro-expression AU correspondence, we encode AU relationships as a sparse structural prior and learn interaction strengths via a Relation-Aware AU Graph Neural Network (R-AUGNN), producing an Instruction Token (IT). We then fuse the CT and IT into a structured textual prompt and introduce Counterfactual Consistency Regularization (CCR) to construct counterfactual samples, enhancing the model's generalization. Extensive experiments demonstrate that AULLM++ achieves state-of-the-art performance on standard benchmarks and exhibits superior cross-domain generalization.
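To make the three-stage pipeline concrete, here is a minimal NumPy sketch of the data flow described in the abstract: multi-granularity features are fused and projected into a compact Content Token, AU node features are passed through one round of message passing over a sparse prior graph with learned edge strengths to yield an Instruction Token, and both tokens are injected into a textual prompt. All function names, dimensions, and the prompt template are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def content_token(mid_feat, high_feat, w_proj):
    # MGE-EFP sketch (hypothetical): concatenate mid-level texture cues with
    # high-level semantics, then project to one compact Content Token (CT).
    fused = np.concatenate([mid_feat, high_feat])
    return np.tanh(w_proj @ fused)

def instruction_token(au_feats, prior_adj, w_edge, w_msg):
    # R-AUGNN sketch (hypothetical): the sparse structural prior masks which
    # AU pairs may interact; interaction strengths are computed from learned
    # weights, followed by a single message-passing step and mean pooling.
    scores = au_feats @ w_edge @ au_feats.T           # pairwise affinities
    strengths = prior_adj * sigmoid(scores)           # keep prior-allowed edges
    messages = strengths @ au_feats                   # aggregate neighbor info
    updated = np.tanh((au_feats + messages) @ w_msg)  # node update
    return updated.mean(axis=0)                       # pool into one IT

def build_prompt(ct, it):
    # Inject both tokens into a structured textual prompt (placeholder template).
    return (f"<CT:{np.round(ct, 2).tolist()}> <IT:{np.round(it, 2).tolist()}> "
            "Given the visual evidence and AU interaction structure, "
            "predict the active action units.")

rng = np.random.default_rng(0)
d_mid, d_high, d_tok, n_au, d_au = 8, 8, 4, 5, 6
w_proj = rng.normal(size=(d_tok, d_mid + d_high))
w_edge = rng.normal(size=(d_au, d_au))
w_msg = rng.normal(size=(d_au, d_au))

mid = rng.normal(size=d_mid)
high = rng.normal(size=d_high)
au_feats = rng.normal(size=(n_au, d_au))
# Sparse structural prior: AU pairs assumed to co-occur
# (e.g., derived from macro-expression AU statistics).
prior_adj = np.zeros((n_au, n_au))
prior_adj[0, 1] = prior_adj[1, 0] = 1.0
prior_adj[2, 3] = prior_adj[3, 2] = 1.0

ct = content_token(mid, high, w_proj)
it = instruction_token(au_feats, prior_adj, w_edge, w_msg)
prompt = build_prompt(ct, it)
print(ct.shape, it.shape)
```

In an actual system, the prompt string would be fed to the LLM (with the tokens embedded in its input space rather than serialized as text); the sketch only traces the shapes and the order of operations.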
Source: arXiv: 2603.08387