arXiv submission date: 2026-04-20
📄 Abstract - MEDN: Motion-Emotion Feature Decoupling Network for Micro-Expression Recognition

Unlike macro-expressions, micro-expressions do not follow a strictly consistent mapping between emotions and Action Units (AUs). As a result, some micro-expressions share identical AUs yet represent completely opposite emotional categories, making them highly similar visually. Existing micro-expression recognition (MER) methods mostly rely on explicit facial motion cues (e.g., optical flow, frame differences, AU features) while ignoring implicit emotion information. To tackle this issue, this paper presents a Motion-Emotion Feature Decoupling Network (MEDN) for MER. We design a dual-branch framework to extract motion and emotion features separately. In the motion branch, an AU-detection task restricts features to the explicit motion domain, and an orthogonal loss is adopted to reduce motion-emotion feature coupling. For implicit emotion modeling, we propose a Sparse Emotion Vision Transformer (SEVit) that sparsifies spatial tokens to highlight local temporal variations at multi-scale sparsity rates. A Collaborative Fusion Module (CoFM) is further developed to adaptively fuse the disentangled motion and emotion features. Extensive experiments on three benchmark datasets validate that MEDN effectively decouples motion and emotion features and achieves superior recognition performance, offering a new perspective for enhancing recognition accuracy and generalization.
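The abstract does not specify the exact form of the orthogonal loss; a common choice for such decoupling objectives is to penalize the cosine similarity between paired feature vectors from the two branches. A minimal sketch under that assumption (the function name and formulation are hypothetical, not from the paper):

```python
import numpy as np

def orthogonal_loss(motion, emotion, eps=1e-8):
    """Hypothetical decoupling loss: mean squared cosine similarity
    between paired motion and emotion features of shape (batch, dim).
    The loss is 0 when every motion/emotion pair is orthogonal."""
    m = motion / (np.linalg.norm(motion, axis=1, keepdims=True) + eps)
    e = emotion / (np.linalg.norm(emotion, axis=1, keepdims=True) + eps)
    cos = np.sum(m * e, axis=1)      # per-sample cosine similarity
    return float(np.mean(cos ** 2))  # squared so sign does not matter

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
print(orthogonal_loss(feats, feats))  # identical features: loss near 1.0
```

Minimizing this term pushes the two branches toward orthogonal subspaces, so the motion branch (anchored by AU detection) cannot trivially duplicate what the emotion branch encodes.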

Top-level tags: computer vision machine learning model training
Detailed tags: micro-expression recognition feature decoupling action units vision transformer emotion modeling

MEDN: Motion-Emotion Feature Decoupling Network for Micro-Expression Recognition


1️⃣ One-Sentence Summary

This paper proposes a dual-branch network, MEDN, that separates facial motion features from emotion features in micro-expressions and then fuses them, addressing the recognition difficulty caused by different emotions sharing similar Action Units and markedly improving the accuracy and generalization of micro-expression recognition.
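The abstract says SEVit "sparsifies spatial tokens to highlight local temporal variations with multi-scale sparsity rates" but gives no mechanism; one plausible reading is keeping, at several keep-rates, the patch tokens whose features change most between frames. A sketch under that assumption (function name, the onset/apex framing, and the keep-rates are all hypothetical):

```python
import numpy as np

def sparse_select(tokens_onset, tokens_apex, rates=(0.25, 0.5)):
    """Hypothetical multi-scale token sparsification: for each keep-rate,
    retain the spatial tokens whose onset-to-apex feature change is largest.
    tokens_*: (num_tokens, dim) arrays of patch embeddings."""
    change = np.linalg.norm(tokens_apex - tokens_onset, axis=1)  # per-token motion magnitude
    order = np.argsort(-change)                                  # most-changed tokens first
    selections = {}
    for r in rates:
        k = max(1, int(round(r * len(change))))
        selections[r] = np.sort(order[:k])                       # kept token indices
    return selections

rng = np.random.default_rng(1)
onset = rng.normal(size=(16, 8))
apex = onset.copy()
apex[3] += 5.0                   # make token 3 change the most
sel = sparse_select(onset, apex)
print(sel[0.25])                 # token 3 is among the 4 kept indices
```

Running the same selection at several rates gives coarse-to-fine views of the localized motion, which the paper's CoFM could then fuse with the motion-branch features.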

Source: arXiv:2604.17899