arXiv submission date: 2026-01-29
📄 Abstract - Temporal Context and Architecture: A Benchmark for Naturalistic EEG Decoding

We study how model architecture and temporal context interact in naturalistic EEG decoding. Using the HBN movie-watching dataset, we benchmark five architectures (CNN, LSTM, a stabilized Transformer called EEGXF, S4, and S5) on a 4-class task across segment lengths from 8 s to 128 s. Accuracy improves with longer context: at 64 s, S5 reaches 98.7% ± 0.6 and CNN 98.3% ± 0.3, while S5 uses roughly 20x fewer parameters than CNN. To probe real-world robustness, we evaluate zero-shot cross-frequency shifts, cross-task OOD inputs, and leave-one-subject-out generalization. S5 achieves stronger cross-subject accuracy but makes over-confident errors on OOD tasks; EEGXF is more conservative and stable under frequency shifts, though less well calibrated in-distribution. These results reveal a practical efficiency-robustness trade-off: choose S5 for parameter-efficient peak accuracy, and EEGXF when robustness and conservative uncertainty estimates are critical.
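The benchmark varies two axes that the abstract names explicitly: the temporal context (segment length, 8 s to 128 s) fed to each model, and subject-level splits for the leave-one-subject-out evaluation. The sketch below illustrates both, assuming a placeholder sampling rate, array shapes, and a hypothetical `train_eval` stub; it is not the authors' pipeline, only a minimal illustration of the data handling the abstract implies.

```python
# Minimal sketch (not the authors' code) of segmenting continuous EEG into
# fixed-length windows and building leave-one-subject-out (LOSO) splits.
# FS, shapes, and the train_eval stub are illustrative assumptions.
import numpy as np

FS = 100  # assumed sampling rate in Hz; not stated in the abstract


def segment(eeg: np.ndarray, seg_seconds: int, fs: int = FS) -> np.ndarray:
    """Cut a continuous recording of shape (channels, time) into
    non-overlapping windows of shape (n_segments, channels, seg_len)."""
    seg_len = seg_seconds * fs
    n_segments = eeg.shape[1] // seg_len
    trimmed = eeg[:, : n_segments * seg_len]
    return trimmed.reshape(eeg.shape[0], n_segments, seg_len).transpose(1, 0, 2)


def loso_splits(subject_ids: np.ndarray):
    """Yield (train_idx, test_idx) index pairs, holding out one subject at a time."""
    for held_out in np.unique(subject_ids):
        yield (np.flatnonzero(subject_ids != held_out),
               np.flatnonzero(subject_ids == held_out))


if __name__ == "__main__":
    # Fake data: 4 subjects, 32 channels, 200 s each (random stand-in for EEG).
    rng = np.random.default_rng(0)
    recordings = [rng.standard_normal((32, 200 * FS)) for _ in range(4)]

    # Sweep the segment lengths used in the benchmark.
    for seg_seconds in (8, 16, 32, 64, 128):
        segs = [segment(r, seg_seconds) for r in recordings]
        X = np.concatenate(segs)
        subjects = np.concatenate([np.full(len(s), i) for i, s in enumerate(segs)])
        for train_idx, test_idx in loso_splits(subjects):
            pass  # train_eval(X[train_idx], X[test_idx]) would go here
```

Keeping windows non-overlapping and splitting by subject (rather than by segment) avoids leaking one subject's data between train and test, which is what makes the cross-subject numbers in the abstract meaningful.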

Top tags: medical, model evaluation, benchmark
Detailed tags: eeg decoding, temporal context, model architecture, robustness evaluation, neural signal processing

Temporal Context and Architecture: A Benchmark for Naturalistic EEG Decoding


1️⃣ One-Sentence Summary

By comparing how different deep learning models perform when decoding long stretches of naturalistic EEG, this study finds a key interaction between model architecture and the length of the temporal window being processed, revealing a clear trade-off between pursuing peak accuracy and maintaining model robustness.

Source: arXiv:2601.21215