arXiv submission date: 2026-01-29
📄 Abstract - What You Feel Is Not What They See: On Predicting Self-Reported Emotion from Third-Party Observer Labels

Self-reported emotion labels capture internal experience, while third-party labels reflect external perception. These perspectives often diverge, limiting the applicability of third-party-trained models to self-report contexts. This gap is critical in mental health, where accurate self-report modeling is essential for guiding intervention. We present the first cross-corpus evaluation of third-party-trained models on self-reports. We find activation unpredictable (CCC ≈ 0) and valence moderately predictable (CCC ≈ 0.3). Crucially, when content is personally significant to the speaker, models achieve high performance for valence (CCC ≈ 0.6–0.8). Our findings point to personal significance as a key pathway for aligning external perception with internal experience and underscore the challenge of self-report activation modeling.
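The CCC values here are almost certainly Lin's concordance correlation coefficient, the standard agreement metric for dimensional (valence/activation) emotion prediction; unlike Pearson's r, it also penalizes shifts in mean and scale between predictions and gold ratings. A minimal sketch of the metric, assuming NumPy arrays of per-sample gold and predicted ratings (the function name `ccc` is illustrative, not from the paper):

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two 1-D arrays.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    Equals 1 only for perfect agreement; mean or scale offsets lower it
    even when the correlation is high.
    """
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()          # population variances
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))   # population covariance
    return float(2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2))

gold = np.array([0.1, 0.4, 0.5, 0.9])
pred = np.array([0.2, 0.3, 0.6, 0.8])
print(round(ccc(gold, pred), 3))  # close agreement -> CCC near 1 (~0.93)
```

On this scale, the paper's reported CCC ≈ 0 for activation means the models carry essentially no usable signal, while CCC ≈ 0.6–0.8 for valence on personally significant content indicates strong agreement.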

Top-level tags: natural language processing · machine learning · medical
Detailed tags: emotion recognition · self-report modeling · cross-corpus evaluation · mental health · valence prediction

What You Feel Is Not What They See: On Predicting Self-Reported Emotion from Third-Party Observer Labels


1️⃣ One-Sentence Summary

This study finds that models trained on external observers' emotion labels struggle to accurately predict a person's self-reported emotions, but when the content being discussed is personally significant to the speaker, prediction of emotional valence improves markedly, offering a key lead for bridging the gap between internal experience and external perception.

Source: arXiv: 2601.21130