arXiv submission date: 2026-03-17
📄 Abstract - Laya: A LeJEPA Approach to EEG via Latent Prediction over Reconstruction

Electroencephalography (EEG) is a widely used tool for studying brain function, with applications in clinical neuroscience, diagnosis, and brain-computer interfaces (BCIs). Recent EEG foundation models trained on large unlabeled corpora aim to learn transferable representations, but their effectiveness remains unclear; reported improvements over smaller task-specific models are often modest, sensitive to downstream adaptation and fine-tuning strategies, and limited under linear probing. We hypothesize that one contributing factor is the reliance on signal reconstruction as the primary self-supervised learning (SSL) objective, which biases representations toward high-variance artifacts rather than task-relevant neural structure. To address this limitation, we explore an SSL paradigm based on Joint Embedding Predictive Architectures (JEPA), which learn by predicting latent representations instead of reconstructing raw signals. While earlier JEPA-style methods often rely on additional heuristics to ensure training stability, recent advances such as LeJEPA provide a more principled and stable formulation. We introduce Laya, the first EEG foundation model based on LeJEPA. Across a range of EEG benchmarks, Laya demonstrates improved performance under linear probing compared to reconstruction-based baselines, suggesting that latent predictive objectives offer a promising direction for learning transferable, high-level EEG representations.
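The core idea of the JEPA objective described above can be sketched in a few lines: encode a context view and a target view of the same EEG window, predict the target's latent from the context's latent, and penalize the latent-space error (rather than reconstructing the raw signal). The sketch below is illustrative only; the encoder, predictor, and shapes are toy assumptions, not Laya's actual architecture, and it omits LeJEPA's additional embedding regularization and the stop-gradient/EMA machinery used in real training.

```python
# Toy sketch of a JEPA-style latent-prediction loss (pure Python, no deps).
# All names and shapes here are illustrative assumptions, not the paper's model.

def encode(signal, weights):
    """Toy linear 'encoder': project a signal window to a latent vector."""
    return [sum(w * x for w, x in zip(row, signal)) for row in weights]

def predict(latent, weights):
    """Toy 'predictor': map the context latent toward the target latent."""
    return [sum(w * z for w, z in zip(row, latent)) for row in weights]

def jepa_loss(context, target, enc_w, tgt_w, pred_w):
    """Mean-squared error between predicted and target latents.

    In real JEPA training the target branch is held constant
    (stop-gradient / EMA target); only the context branch learns.
    """
    z_ctx = encode(context, enc_w)
    z_tgt = encode(target, tgt_w)   # treated as a constant target
    z_hat = predict(z_ctx, pred_w)
    return sum((p - t) ** 2 for p, t in zip(z_hat, z_tgt)) / len(z_tgt)

# Two views of the same (fake) 4-sample EEG window:
context = [0.1, -0.2, 0.3, 0.0]
target  = [0.0, 0.3, -0.1, 0.2]
enc_w  = [[1, 0, 0, 0], [0, 1, 0, 0]]   # 4 inputs -> 2 latent dims
tgt_w  = [[1, 0, 0, 0], [0, 1, 0, 0]]
pred_w = [[1, 0], [0, 1]]                # identity predictor
print(jepa_loss(context, target, enc_w, tgt_w, pred_w))  # → 0.13
```

The key contrast with reconstruction-based SSL is that the loss never touches raw signal amplitudes, so high-variance artifacts contribute only insofar as they survive in the learned latent space.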

Top-level tags: medical, model training, machine learning
Detailed tags: EEG, self-supervised learning, representation learning, foundation model, brain-computer interface

Laya: A LeJEPA Approach to EEG via Latent Prediction over Reconstruction


1️⃣ One-Sentence Summary

This paper introduces Laya, a new EEG foundation model that learns by predicting latent representations rather than reconstructing the raw signal. This lets it capture task-relevant neural structure better than conventional reconstruction-based methods, yielding stronger performance across multiple benchmarks.

Source: arXiv 2603.16281