Abstract - Explainability of Recurrent Neural Networks for Enhancing P300-based Brain-Computer Interfaces
Brain-Computer Interfaces (BCIs) based on P300 event-related potentials offer promising applications in health, education, and assistive technologies. However, challenges related to inter- and intra-subject variability and the explainability of Deep Learning (DL) models limit their practical deployment. In this work, we present the Post-Recurrent Module (PRM), an additional layer designed to improve both performance and transparency, incorporated into a Recurrent Neural Network (RNN) architecture for classifying P300 signals from EEG data. Our approach enables a dual analysis of spatio-temporal signals through both global and local explainability techniques, allowing us not only to identify the most relevant brain regions and critical time intervals involved in classification, but also to interpret model decisions in terms of spatio-temporal EEG patterns consistent with well-established neurophysiological descriptions of the P300. Experimental results show a 9% improvement in performance over the state of the art, while also revealing the importance of inter- and intra-subject variability, in alignment with established neuroscience literature. By making model decisions transparent and efficient, we present a framework for explainable EEG-based models. This framework is not limited to more efficient P300 detection, but can be generalized to a wide range of EEG-based tasks. Its ability to identify key spatial and temporal features makes it suitable for applications such as motor imagery, steady-state visual evoked potentials, and even cognitive workload assessment.
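The abstract describes an RNN over EEG epochs followed by a post-recurrent layer (PRM) whose outputs expose which time intervals drive the classification. A minimal sketch of that idea, assuming a simple Elman-style recurrence and a PRM modeled as a softmax attention over hidden states (the layer sizes, shapes, and PRM form are illustrative assumptions, not the paper's actual architecture):

```python
import math
import random

random.seed(0)
T, C, H = 32, 4, 8  # time steps, EEG channels, hidden units (assumed sizes)

def mat(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

Wx, Wh = mat(H, C), mat(H, H)                 # input and recurrent weights
wp = [random.gauss(0, 0.1) for _ in range(H)] # PRM-like scoring vector (assumed)
Wo = mat(2, H)                                # output: P300 vs non-P300

x = [[random.gauss(0, 1) for _ in range(C)] for _ in range(T)]  # synthetic epoch

# Recurrent pass: keep every hidden state so the PRM can weigh time steps.
h, hs = [0.0] * H, []
for t in range(T):
    pre = [a + b for a, b in zip(matvec(Wx, x[t]), matvec(Wh, h))]
    h = [math.tanh(p) for p in pre]
    hs.append(h)

# Post-recurrent step: score each time step, softmax over time.
scores = [sum(w * hi for w, hi in zip(wp, ht)) for ht in hs]
m = max(scores)
exps = [math.exp(s - m) for s in scores]
attn = [e / sum(exps) for e in exps]          # per-time-step relevance weights
context = [sum(a * ht[j] for a, ht in zip(attn, hs)) for j in range(H)]

logits = matvec(Wo, context)
zmax = max(logits)
ez = [math.exp(z - zmax) for z in logits]
probs = [e / sum(ez) for e in ez]             # binary class probabilities
```

In a sketch like this, `attn` is the interpretable by-product: its peaks mark the time windows the classifier relied on, which is the kind of temporal-relevance readout the abstract attributes to the PRM.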
Explainability of Recurrent Neural Networks for Enhancing P300-based Brain-Computer Interfaces
1️⃣ One-sentence summary
This paper proposes an additional network layer called the Post-Recurrent Module (PRM), embedded in a Recurrent Neural Network, which improves P300 EEG classification accuracy by 9% while using global and local explainability techniques to clearly reveal the key brain regions and time windows underlying the model's decisions, making the deep learning model's behavior consistent with neuroscientific findings and thereby providing a new framework for building transparent, efficient, and generalizable EEG analysis systems.